[jira] [Updated] (HIVE-22083) Values of tag order cannot be null, so it can be "byte" instead of "Byte"

2019-08-06 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-22083:
---
Attachment: HIVE-22083.2.patch

> Values of tag order cannot be null, so it can be "byte" instead of "Byte"
> -
>
> Key: HIVE-22083
> URL: https://issues.apache.org/jira/browse/HIVE-22083
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Attachments: HIVE-22083.1.patch, HIVE-22083.2.patch
>
>
> Values of tag order cannot be null, so it can be a primitive "byte" instead of 
> "Byte". Switching between Byte and byte is "cheap" - the Byte objects are cached 
> by the JVM - but it still costs a bit of extra memory and CPU.
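
As a minimal, hedged illustration of the boxing cost described above (a generic Java sketch, not code from the attached patches; the class name is made up):

{code:java}
// Generic JDK-only sketch: boxed Byte vs. primitive byte.
public class ByteVsPrimitive {
    public static void main(String[] args) {
        Byte boxed = 42;      // autoboxed via Byte.valueOf(42), served from the JVM's Byte cache
        byte primitive = 42;  // plain primitive value, no object involved

        // The cache means boxing does not allocate a new object for byte values...
        System.out.println(boxed == Byte.valueOf((byte) 42)); // true, same cached instance
        // ...but arithmetic still has to unbox the wrapper first:
        int sum = boxed + primitive; // compiles to boxed.byteValue() + primitive
        System.out.println(sum);     // 84
    }
}
{code}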



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exist

2019-08-06 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901655#comment-16901655
 ] 

xiepengjie commented on HIVE-22040:
---

Thanks [~jdere] for reviewing the patch.

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exist
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.03.patch, HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. When I try to drop a 
> partition whose parent path does not exist, the operation throws 'Failed to 
> delete parent: File does not exist'. The partition's metadata in MySQL has 
> already been deleted, but the exception is still thrown, so the drop fails when 
> connecting to HiveServer2 over JDBC from Java. This problem also exists on the 
> master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> – Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901602#comment-16901602
 ] 

Hive QA commented on HIVE-22074:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976876/HIVE-22074.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 16723 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18277/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18277/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18277/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976876 - PreCommit-HIVE-Build

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.
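
To make the compilation overhead concrete, here is a self-contained Java sketch of the IN-to-OR expansion. The Expr classes below are hypothetical stand-ins invented for this illustration; they are not Hive's ExprNodeDesc or TypeCheckProcFactoryUtils.rewriteInToOR, but they show how one IN node over N values becomes N equality nodes chained by N-1 OR nodes, so every subsequent optimizer pass has to walk a tree that grows linearly with the size of the IN list:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified expression model; for illustration only.
public class InToOrSketch {

    static abstract class Expr {
        final List<Expr> children = new ArrayList<>();
        int nodeCount() {
            int n = 1;
            for (Expr child : children) {
                n += child.nodeCount();
            }
            return n;
        }
    }
    static class Col extends Expr { Col(String name) {} }
    static class Const extends Expr { Const(int value) {} }
    static class Eq extends Expr { Eq(Expr l, Expr r) { children.add(l); children.add(r); } }
    static class Or extends Expr { Or(Expr l, Expr r) { children.add(l); children.add(r); } }

    // Rewrite "col IN (v1, ..., vN)" into "(col = v1) OR (col = v2) OR ...".
    static Expr rewriteInToOr(Expr col, int[] values) {
        Expr result = new Eq(col, new Const(values[0]));
        for (int i = 1; i < values.length; i++) {
            result = new Or(result, new Eq(col, new Const(values[i])));
        }
        return result;
    }

    public static void main(String[] args) {
        int[] values = new int[10000];
        for (int i = 0; i < values.length; i++) {
            values[i] = i;
        }
        Expr or = rewriteInToOr(new Col("c"), values);
        // A single IN over 10,000 values becomes 39,999 nodes that every later
        // CBO rule must traverse.
        System.out.println(or.nodeCount());
    }
}
{code}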



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901582#comment-16901582
 ] 

Hive QA commented on HIVE-22074:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 2 new + 260 unchanged - 1 
fixed = 262 total (was 261) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18277/dev-support/hive-personality.sh
 |
| git revision | master / 333264b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18277/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18277/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22080) Prevent implicit conversion from String/char/varchar to double/decimal

2019-08-06 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901579#comment-16901579
 ] 

Jason Dere commented on HIVE-22080:
---

Just wanted to call out that the previous behavior had special cases where string 
=> double/decimal conversion was allowed while the other numeric types were 
disallowed, with hive.metastore.disallow.incompatible.col.type.changes=true. 
This change just makes the conversion behavior consistent (disallowed) for all 
numeric types.
Users can always set 
hive.metastore.disallow.incompatible.col.type.changes=false if they want to be 
able to change a column type from string to a numeric type.
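
For reference, here is a hypothetical sequence illustrating the workaround described above. The table and column names are made up, and with a remote metastore the property generally has to be set in the metastore's configuration rather than per session:

{code:sql}
-- With hive.metastore.disallow.incompatible.col.type.changes=true (the default),
-- string -> double is now rejected just like string -> int already was.
ALTER TABLE example_table CHANGE COLUMN amount amount DOUBLE;

-- Relax the check first if the conversion is really intended.
SET hive.metastore.disallow.incompatible.col.type.changes=false;
ALTER TABLE example_table CHANGE COLUMN amount amount DOUBLE;
{code}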

> Prevent implicit conversion from String/char/varchar to double/decimal
> --
>
> Key: HIVE-22080
> URL: https://issues.apache.org/jira/browse/HIVE-22080
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22080.1.patch, HIVE-22080.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implicit conversion from string family types to any non-string family types is 
> invalid. Users can force the conversion by turning off the setting 
> hive.metastore.disallow.incompatible.col.type.changes. If it is not turned off, 
> such a conversion should throw an error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22080) Prevent implicit conversion from String/char/varchar to double/decimal

2019-08-06 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901575#comment-16901575
 ] 

Jason Dere commented on HIVE-22080:
---

+1

> Prevent implicit conversion from String/char/varchar to double/decimal
> --
>
> Key: HIVE-22080
> URL: https://issues.apache.org/jira/browse/HIVE-22080
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22080.1.patch, HIVE-22080.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implicit conversion from string family types to any non-string family types is 
> invalid. Users can force the conversion by turning off the setting 
> hive.metastore.disallow.incompatible.col.type.changes. If it is not turned off, 
> such a conversion should throw an error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22074?focusedWorklogId=290073&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290073
 ]

ASF GitHub Bot logged work on HIVE-22074:
-

Author: ASF GitHub Bot
Created on: 06/Aug/19 23:54
Start Date: 06/Aug/19 23:54
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #746: HIVE-22074: 
Slow compilation due to IN to OR transformation
URL: https://github.com/apache/hive/pull/746#discussion_r311319092
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
 ##
 @@ -1220,16 +1220,26 @@ protected ExprNodeDesc getXpathOrFuncExprNodeDesc(ASTNode expr,
 }
 outputOpList.add(nullConst);
   }
+
   if (!ctx.isCBOExecuted()) {
-ArrayList orOperands = TypeCheckProcFactoryUtils.rewriteInToOR(children);
-if (orOperands != null) {
-  if (orOperands.size() == 1) {
-orOperands.add(new ExprNodeConstantDesc(TypeInfoFactory.booleanTypeInfo, false));
+
+HiveConf conf;
+try {
+  conf = Hive.get().getConf();
 
 Review comment:
   Instead of obtaining the conf object here statically, let's just pass the 
int value using the ctx.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290073)
Time Spent: 0.5h  (was: 20m)

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22074?focusedWorklogId=290072&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290072
 ]

ASF GitHub Bot logged work on HIVE-22074:
-

Author: ASF GitHub Bot
Created on: 06/Aug/19 23:54
Start Date: 06/Aug/19 23:54
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #746: HIVE-22074: 
Slow compilation due to IN to OR transformation
URL: https://github.com/apache/hive/pull/746#discussion_r311318637
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/RexNodeConverter.java
 ##
 @@ -151,13 +153,15 @@ public RexNodeConverter(RelOptCluster cluster) {
   //subqueries will need outer query's row resolver
   public RexNodeConverter(RelOptCluster cluster, RelDataType inpDataType,
   ImmutableMap outerNameToPosMap,
-  ImmutableMap nameToPosMap, RowResolver hiveRR, RowResolver outerRR, int offset, boolean flattenExpr, int correlatedId) {
+  ImmutableMap nameToPosMap, RowResolver hiveRR, RowResolver outerRR,
+  HiveConf conf, int offset, boolean flattenExpr, int correlatedId) {
 
 Review comment:
   Can we pass the int value instead of the full config object?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290072)
Time Spent: 20m  (was: 10m)

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901556#comment-16901556
 ] 

Vineet Garg commented on HIVE-22074:


[~jcamachorodriguez] Can you take a look please? 
https://github.com/apache/hive/pull/746

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22074?focusedWorklogId=290061&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290061
 ]

ASF GitHub Bot logged work on HIVE-22074:
-

Author: ASF GitHub Bot
Created on: 06/Aug/19 23:29
Start Date: 06/Aug/19 23:29
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #746: 
HIVE-22074: Slow compilation due to IN to OR transformation
URL: https://github.com/apache/hive/pull/746
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290061)
Time Spent: 10m
Remaining Estimate: 0h

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22074:
--
Labels: pull-request-available  (was: )

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22074:
---
Attachment: HIVE-22074.4.patch

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22074:
---
Status: Open  (was: Patch Available)

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22074:
---
Status: Patch Available  (was: Open)

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch, HIVE-22074.4.patch
>
>
> Currently Hive transforms IN expressions to OR to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number 
> of expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is not realized 
> due to the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901549#comment-16901549
 ] 

Hive QA commented on HIVE-21241:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976858/HIVE-21241.6.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16731 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18276/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18276/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18276/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976858 - PreCommit-HIVE-Build

> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch, HIVE-21241.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive uses Joda time for its TimeStampParser.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Migrate TimeStampParser to {{java.time}}
> I also added a couple new pre-canned timestamp parsers for convenience:
> * ISO 8601
> * RFC 1123
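
As a hedged sketch of what the pre-canned formats look like on plain {{java.time}} (JDK-only code, not the patch's actual TimestampParser changes):

{code:java}
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class JavaTimeParsingSketch {
    public static void main(String[] args) {
        // ISO 8601 date-time with offset, using the JDK's built-in formatter.
        ZonedDateTime iso = ZonedDateTime.parse("2019-08-06T23:54:00+00:00",
                DateTimeFormatter.ISO_OFFSET_DATE_TIME);

        // RFC 1123, also shipped with java.time.
        ZonedDateTime rfc = ZonedDateTime.parse("Tue, 6 Aug 2019 23:54:00 GMT",
                DateTimeFormatter.RFC_1123_DATE_TIME);

        System.out.println(iso.toInstant()); // 2019-08-06T23:54:00Z
        System.out.println(rfc.toInstant()); // 2019-08-06T23:54:00Z
    }
}
{code}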



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901515#comment-16901515
 ] 

Hive QA commented on HIVE-21241:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} serde in master has 193 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} common: The patch generated 0 new + 0 unchanged - 14 
fixed = 0 total (was 14) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} The patch serde passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} common generated 0 new + 61 unchanged - 1 fixed = 61 
total (was 62) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} serde in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} common generated 0 new + 26 unchanged - 1 fixed = 26 
total (was 27) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} serde in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18276/dev-support/hive-personality.sh
 |
| git revision | master / 333264b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common serde U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18276/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch, HIVE-21241.6.patch

[jira] [Commented] (HIVE-20568) There is no need to convert the dbname to pattern while pulling tablemeta

2019-08-06 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901460#comment-16901460
 ] 

Rajkumar Singh commented on HIVE-20568:
---

Hi [~cstenac], yes, that's the right assumption. It is not only a cosmetic fix; 
it indeed resolves the problem where Ranger skips the auth check if the dbname 
has a special character.

> There is no need to convert the dbname to pattern while pulling tablemeta
> -
>
> Key: HIVE-20568
> URL: https://issues.apache.org/jira/browse/HIVE-20568
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 0.4.0
> Environment: Hive-4,Java-8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-20568.patch
>
>
> There is no need to convert the dbname to a pattern; dbNamePattern is just a 
> dbName which we are passing to getTableMeta:
> https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java#L117
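
A generic illustration of why passing an exact dbName through as a LIKE-style pattern is risky (plain-JDK demo code with standard SQL LIKE semantics, not Hive's actual pattern conversion): an underscore in the name suddenly acts as a single-character wildcard, so the lookup can match more databases than intended.

{code:java}
public class DbNamePatternSketch {
    // Naive LIKE-style matcher: '%' matches any sequence, '_' any single character.
    static boolean likeMatches(String pattern, String value) {
        String regex = pattern.replace("%", ".*").replace("_", ".");
        return value.matches(regex);
    }

    public static void main(String[] args) {
        String dbName = "my_db"; // meant as an exact database name
        System.out.println(likeMatches(dbName, "my_db")); // true  (intended)
        System.out.println(likeMatches(dbName, "my1db")); // true  (unintended match)
    }
}
{code}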



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22080) Prevent implicit conversion from String/char/varchar to double/decimal

2019-08-06 Thread Ramesh Kumar Thangarajan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901453#comment-16901453
 ] 

Ramesh Kumar Thangarajan commented on HIVE-22080:
-

Pull request created [https://github.com/apache/hive/pull/745]

Hi [~jdere], can you please help me review the change in the pull request?

Thanks,

Ramesh

> Prevent implicit conversion from String/char/varchar to double/decimal
> --
>
> Key: HIVE-22080
> URL: https://issues.apache.org/jira/browse/HIVE-22080
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22080.1.patch, HIVE-22080.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implicit conversion from string family types to any non-string family types is 
> invalid. Users can force the conversion by turning off the setting 
> hive.metastore.disallow.incompatible.col.type.changes. If it is not turned off, 
> such a conversion should throw an error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22080) Prevent implicit conversion from String/char/varchar to double/decimal

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22080?focusedWorklogId=289985&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-289985
 ]

ASF GitHub Bot logged work on HIVE-22080:
-

Author: ASF GitHub Bot
Created on: 06/Aug/19 20:48
Start Date: 06/Aug/19 20:48
Worklog Time Spent: 10m 
  Work Description: ramesh0201 commented on pull request #745: HIVE-22080 
Prevent implicit conversion from String/char/varchar to double/decimal
URL: https://github.com/apache/hive/pull/745
 
 
   Changes for preventing implicit conversion from string family types to 
double/decimal types.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 289985)
Time Spent: 10m
Remaining Estimate: 0h

> Prevent implicit conversion from String/char/varchar to double/decimal
> --
>
> Key: HIVE-22080
> URL: https://issues.apache.org/jira/browse/HIVE-22080
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22080.1.patch, HIVE-22080.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implicit conversion from string family types to any non-string family types is 
> invalid. Users can force the conversion by turning off the setting 
> hive.metastore.disallow.incompatible.col.type.changes. If it is not turned off, 
> such a conversion should throw an error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22080) Prevent implicit conversion from String/char/varchar to double/decimal

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22080:
--
Labels: pull-request-available  (was: )

> Prevent implicit conversion from String/char/varchar to double/decimal
> --
>
> Key: HIVE-22080
> URL: https://issues.apache.org/jira/browse/HIVE-22080
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22080.1.patch, HIVE-22080.2.patch
>
>
> Implicit conversion from string family types to any non-string family types is 
> invalid. Users can force the conversion by turning off the setting 
> hive.metastore.disallow.incompatible.col.type.changes. If it is not turned off, 
> such a conversion should throw an error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exist

2019-08-06 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22040:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed to hive master, thanks for the patch [~xiepengjie]

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exist
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.03.patch, HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. When I try to drop a 
> partition whose parent path does not exist, the operation throws 'Failed to 
> delete parent: File does not exist'. The partition's metadata in MySQL has 
> already been deleted, but the exception is still thrown, so the drop fails when 
> connecting to HiveServer2 over JDBC from Java. This problem also exists on the 
> master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> – Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-21241:
--
Status: Patch Available  (was: Open)

> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch, HIVE-21241.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive uses Joda time for its TimeStampParser.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Migrate TimeStampParser to {{java.time}}
> I also added a couple new pre-canned timestamp parsers for convenience:
> * ISO 8601
> * RFC 1123



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-21241:
--
Attachment: HIVE-21241.6.patch

> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch, HIVE-21241.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive uses Joda time for its TimeStampParser.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Migrate TimeStampParser to {{java.time}}
> I also added a couple new pre-canned timestamp parsers for convenience:
> * ISO 8601
> * RFC 1123



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-21241:
--
Status: Open  (was: Patch Available)

> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch, HIVE-21241.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive uses Joda time for its TimeStampParser.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Migrate TimeStampParser to {{java.time}}
> I also added a couple new pre-canned timestamp parsers for convenience:
> * ISO 8601
> * RFC 1123



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22001) AcidUtils.getAcidState() can fail if Cleaner is removing files at the same time

2019-08-06 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22001:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed to master, thanks for review [~ashutoshc]

> AcidUtils.getAcidState() can fail if Cleaner is removing files at the same 
> time
> ---
>
> Key: HIVE-22001
> URL: https://issues.apache.org/jira/browse/HIVE-22001
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22001.1.patch
>
>
> Had one user hit the following error during getSplits
> {noformat}
> 2019-07-06T14:33:03,067 ERROR [4640181a-3eb7-4b3e-9a40-d7a8de9a570c 
> HiveServer2-HttpHandler-Pool: Thread-415519]: SessionState 
> (SessionState.java:printError(1247)) - Vertex failed, vertexName=Map 1, 
> vertexId=vertex_1560947172646_2452_6199_00, diagnostics=[Vertex 
> vertex_1560947172646_2452_6199_00 [Map 1] killed/failed due 
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: hive_table initializer failed, 
> vertex=vertex_1560947172646_2452_6199_00 [Map 1], java.lang.RuntimeException: 
> ORC split generation failed with exception: java.io.FileNotFoundException: 
> File hdfs://path/to/hive_table/oiddatemmdd=20190706/delta_0987070_0987070 
> does not exist.
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1870)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1958)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:524)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:779)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: 
> java.io.FileNotFoundException: File 
> hdfs://path/to/hive_table/oiddatemmdd=20190706/delta_0987070_0987070 does 
> not exist.
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1809)
> ... 17 more
> Caused by: java.io.FileNotFoundException: File 
> hdfs://path/to/hive_table/oiddatemmdd=20190706/delta_0987070_0987070 does 
> not exist.
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1059)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$1000(DistributedFileSystem.java:131)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1119)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1116)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1126)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1868)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1953)
> at 
> org.apache.hadoop.hiv

[jira] [Comment Edited] (HIVE-21376) Incompatible change in Hive bucket computation

2019-08-06 Thread Piotr Findeisen (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901291#comment-16901291
 ] 

Piotr Findeisen edited comment on HIVE-21376 at 8/6/19 7:52 PM:


bq. If bucketed data with those types has been written in 3.0 using v2, a user 
should recreate those bucketed tables using a more recent Hive version.

To me that means there is a disincentive to deploying Hive 3 in production 
until this issue is fixed.
It's fixed in 3.1.2, but the latest available from HDP is 3.1.0.

[~jcamachorodriguez] do you have a timeline when 3.1.2 will be available in HDP?



was (Author: findepi):
bq. If bucketed data with those types has been written in 3.0 using v2, a user 
should recreate those bucketed tables using a more recent Hive version.

To me that means Hive 3 should not be deployed on production until this issue 
is fixed.
It's fixed in 3.1.2, but the latest available from HDP is 3.1.0.

[~jcamachorodriguez] do you have a timeline when 3.1.2 will be available in HDP?


> Incompatible change in Hive bucket computation
> --
>
> Key: HIVE-21376
> URL: https://issues.apache.org/jira/browse/HIVE-21376
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: David Phillips
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.2
>
> Attachments: HIVE-21376.01.patch, HIVE-21376.patch
>
>
> HIVE-20007 seems to have inadvertently changed the bucket hash code 
> computation via {{ObjectInspectorUtils.getBucketHashCodeOld()}} for the 
> {{DATE}} and {{TIMESTAMP}} data types.
> {{DATE}} was previously computed using {{DateWritable}}, which uses 
> {{daysSinceEpoch}} as the hash code. It is now computed using 
> {{DateWritableV2}}, which uses the hash code of {{java.time.LocalDate}} 
> (which is not days since epoch).
> {{TIMESTAMP}} was previously computed using {{TimestampWritable}} and now uses 
> {{TimestampWritableV2}}. They ostensibly use the same hash code computation, 
> but there are two important differences:
>  # {{TimestampWritable}} rounds the number of milliseconds into the seconds 
> portion of the computation, but {{TimestampWritableV2}} does not.
>  # {{TimestampWritable}} gets the epoch time from {{java.sql.Timestamp}}, 
> which returns it relative to the JVM time zone, not UTC. 
> {{TimestampWritableV2}} uses a {{LocalDateTime}} relative to UTC.
> I was unable to get Hive 3.1 running in order to verify if this actually 
> causes data to be read or written incorrectly (there may be code above this 
> library method which makes things work correctly). However, if my 
> understanding is correct, this means Hive 3.1 is both forwards and backwards 
> incompatible with bucketed tables using either of these data types. It also 
> indicates that Hive needs tests to verify that the hash code does not change 
> between releases.
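
The {{DATE}} part of the claim is easy to check with plain JDK code (a sketch based on the description above, not Hive's writables): {{DateWritable}} hashed to the days-since-epoch value, while {{java.time.LocalDate#hashCode}} is a bit-mix of year, month and day, so the two disagree for essentially every date.

{code:java}
import java.time.LocalDate;

public class DateHashCheck {
    public static void main(String[] args) {
        LocalDate d = LocalDate.of(2019, 8, 6);

        long daysSinceEpoch = d.toEpochDay(); // the value DateWritable used as its hash code
        int localDateHash = d.hashCode();     // the value the V2 path effectively relies on

        System.out.println(daysSinceEpoch);   // 18114
        System.out.println(localDateHash);    // a year/month/day bit-mix, not 18114
        System.out.println(daysSinceEpoch == localDateHash); // false -> different bucket assignment
    }
}
{code}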



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

2019-08-06 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901406#comment-16901406
 ] 

Nishant Bangarwa commented on HIVE-20683:
-

[~jcamachorodriguez] this seems ready to be pushed, unless you have any 
comments?

> Add the Ability to push Dynamic Between and Bloom filters to Druid
> --
>
> Key: HIVE-20683
> URL: https://issues.apache.org/jira/browse/HIVE-20683
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, 
> HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, HIVE-20683.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter with min-max values and a 
> BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have support for Bloom filters (Added via 
> https://github.com/apache/incubator-druid/pull/6222)
> Implementation details - 
> # Hive generates and passes the filters as part of 'filterExpr' in TableScan. 
> # DruidQueryBasedRecordReader gets this filter passed as part of the conf. 
> # During execution phase, before sending the query to druid in 
> DruidQueryBasedRecordReader we will deserialize this filter, translate it 
> into a DruidDimFilter and add it to existing DruidQuery.  Tez executor 
> already ensures that when we start reading results from the record reader, 
> all the dynamic values are initialized. 
> # Explaining a druid query also prints the query sent to druid as 
> {{druid.json.query}}. We also need to make sure to update the druid query 
> with the filters. During explain we do not have the actual values for the 
> dynamic values, so instead of values we will print the dynamic expression 
> itself as part of druid query. 
> Note:- This work needs druid to be updated to version 0.13.0
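
As background for the description above, here is a rough, self-contained sketch of 
the semi-join reduction idea: a min-max BETWEEN check plus a Bloom-style membership 
test built from the build side and applied to the probe side. The class and hash 
scheme are invented for illustration; this is neither Hive's runtime filter nor 
Druid's bloom filter API.

{code:java}
import java.util.BitSet;
import java.util.List;

public class SemiJoinFilterSketch {
    private final long min, max;
    private final BitSet bloom = new BitSet(1 << 16);

    SemiJoinFilterSketch(List<Long> buildKeys) {
        long lo = Long.MAX_VALUE, hi = Long.MIN_VALUE;
        for (long k : buildKeys) {
            lo = Math.min(lo, k);
            hi = Math.max(hi, k);
            bloom.set(hash(k, 17));
            bloom.set(hash(k, 31));
        }
        this.min = lo;
        this.max = hi;
    }

    private static int hash(long k, int seed) {
        return Math.floorMod(Long.hashCode(k * seed + 0x9E3779B9L), 1 << 16);
    }

    /** True if the probe key may match the build side; false means the row can be skipped. */
    boolean mightMatch(long probeKey) {
        return probeKey >= min && probeKey <= max       // BETWEEN min-max
            && bloom.get(hash(probeKey, 17))            // Bloom membership test
            && bloom.get(hash(probeKey, 31));           // (may false-positive, never false-negative)
    }

    public static void main(String[] args) {
        SemiJoinFilterSketch f = new SemiJoinFilterSketch(List.of(3L, 42L, 99L));
        System.out.println(f.mightMatch(42L)); // true
        System.out.println(f.mightMatch(7L));  // almost certainly false -> row pruned
    }
}
{code}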



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901382#comment-16901382
 ] 

Hive QA commented on HIVE-20683:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976832/HIVE-20683.5.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 16692 tests 
executed
*Failed tests:*
{noformat}
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestObjectStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[query1b]
 (batchId=296)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18275/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18275/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18275/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976832 - PreCommit-HIVE-Build

> Add the Ability to push Dynamic Between and Bloom filters to Druid
> --
>
> Key: HIVE-20683
> URL: https://issues.apache.org/jira/browse/HIVE-20683
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, 
> HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, HIVE-20683.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter with min-max values and a 
> BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have support for Bloom filters (Added via 
> https://github.com/apache/incubator-druid/pull/6222)
> Implementation details - 
> # Hive generates and passes the filters as part of 'filterExpr' in TableScan. 
> # DruidQueryBasedRecordReader gets this filter passed as part of the conf. 
> # During execution phase, before sending the query to druid in 
> DruidQueryBasedRecordReader we will deserialize this filter, translate it 
> into a DruidDimFilter and add it to existing DruidQuery.  Tez executor 
> already ensures that when we start reading results from the record reader, 
> all the dynamic values are initialized. 
> # Explaining a druid query also prints the query sent to druid as 
> {{druid.json.query}}. We also need to make sure to update the druid query 
> with the filters. During explain we do not have the actual values for the 
> dynamic values, so instead of values we will print the dynamic expression 
> itself as part of druid query. 
> Note:- This work needs druid to be updated to version 0.13.0



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901372#comment-16901372
 ] 

Hive QA commented on HIVE-20683:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} druid-handler in master has 3 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
20s{color} | {color:blue} itests/qtest-druid in master has 7 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18275/dev-support/hive-personality.sh
 |
| git revision | master / 5737095 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql druid-handler . itests itests/qtest-druid U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18275/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add the Ability to push Dynamic Between and Bloom filters to Druid
> --
>
> Key: HIVE-20683
> URL: https://issues.apache.org/jira/browse/HIVE-20683
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, 
> HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, HIVE-20683.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter with min-max values and a 
> BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have support for Bloom filters 

[jira] [Commented] (HIVE-21376) Incompatible change in Hive bucket computation

2019-08-06 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901324#comment-16901324
 ] 

Jesus Camacho Rodriguez commented on HIVE-21376:


[~findepi], let's not discuss HDP vs Apache here as I think it is only going to 
confuse the community since the HDP version is not a 1:1 match to any Apache version. 
DM if you have more questions about HDP.

> Incompatible change in Hive bucket computation
> --
>
> Key: HIVE-21376
> URL: https://issues.apache.org/jira/browse/HIVE-21376
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: David Phillips
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.2
>
> Attachments: HIVE-21376.01.patch, HIVE-21376.patch
>
>
> HIVE-20007 seems to have inadvertently changed the bucket hash code 
> computation via {{ObjectInspectorUtils.getBucketHashCodeOld()}} for the 
> {{DATE}} and {{TIMESTAMP}} data types.
> {{DATE}} was previously computed using {{DateWritable}}, which uses 
> {{daysSinceEpoch}} as the hash code. It is now computed using 
> {{DateWritableV2}}, which uses the hash code of {{java.time.LocalDate}} 
> (which is not days since epoch).
> {{TIMESTAMP}} was previously computed using {{TimestampWritable}} and now uses 
> {{TimestampWritableV2}}. They ostensibly use the same hash code computation, 
> but there are two important differences:
>  # {{TimestampWritable}} rounds the number of milliseconds into the seconds 
> portion of the computation, but {{TimestampWritableV2}} does not.
>  # {{TimestampWritable}} gets the epoch time from {{java.sql.Timestamp}}, 
> which returns it relative to the JVM time zone, not UTC. 
> {{TimestampWritableV2}} uses a {{LocalDateTime}} relative to UTC.
> I was unable to get Hive 3.1 running in order to verify if this actually 
> causes data to be read or written incorrectly (there may be code above this 
> library method which makes things work correctly). However, if my 
> understanding is correct, this means Hive 3.1 is both forwards and backwards 
> incompatible with bucketed tables using either of these data types. It also 
> indicates that Hive needs tests to verify that the hash code does not change 
> between releases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HIVE-21376) Incompatible change in Hive bucket computation

2019-08-06 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901324#comment-16901324
 ] 

Jesus Camacho Rodriguez edited comment on HIVE-21376 at 8/6/19 5:37 PM:


[~findepi], let's not discuss HDP vs Apache here as I think it is only going to 
confuse the community since the HDP version is not a 1:1 match to the Apache version. 
DM if you have more questions about HDP.


was (Author: jcamachorodriguez):
[~findepi], let's not discuss HDP vs Apache here as I think it is only going to 
confuse the community since the HDP version is not a 1:1 match to any Apache version. 
DM if you have more questions about HDP.

> Incompatible change in Hive bucket computation
> --
>
> Key: HIVE-21376
> URL: https://issues.apache.org/jira/browse/HIVE-21376
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: David Phillips
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.2
>
> Attachments: HIVE-21376.01.patch, HIVE-21376.patch
>
>
> HIVE-20007 seems to have inadvertently changed the bucket hash code 
> computation via {{ObjectInspectorUtils.getBucketHashCodeOld()}} for the 
> {{DATE}} and {{TIMESTAMP}} data types.
> {{DATE}} was previously computed using {{DateWritable}}, which uses 
> {{daysSinceEpoch}} as the hash code. It is now computed using 
> {{DateWritableV2}}, which uses the hash code of {{java.time.LocalDate}} 
> (which is not days since epoch).
> {{TIMESTAMP}} was previously computed using {{TimestampWritable}} and now uses 
> {{TimestampWritableV2}}. They ostensibly use the same hash code computation, 
> but there are two important differences:
>  # {{TimestampWritable}} rounds the number of milliseconds into the seconds 
> portion of the computation, but {{TimestampWritableV2}} does not.
>  # {{TimestampWritable}} gets the epoch time from {{java.sql.Timestamp}}, 
> which returns it relative to the JVM time zone, not UTC. 
> {{TimestampWritableV2}} uses a {{LocalDateTime}} relative to UTC.
> I was unable to get Hive 3.1 running in order to verify if this actually 
> causes data to be read or written incorrectly (there may be code above this 
> library method which makes things work correctly). However, if my 
> understanding is correct, this means Hive 3.1 is both forwards and backwards 
> incompatible with bucketed tables using either of these data types. It also 
> indicates that Hive needs tests to verify that the hash code does not change 
> between releases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21543) Use FilterHooks for show compactions

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901319#comment-16901319
 ] 

Hive QA commented on HIVE-21543:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12964192/HIVE-21543.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16723 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18274/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18274/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18274/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12964192 - PreCommit-HIVE-Build

> Use FilterHooks for show compactions
> 
>
> Key: HIVE-21543
> URL: https://issues.apache.org/jira/browse/HIVE-21543
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21543.01.patch
>
>
> Use FilterHooks for checking dbs/tables/partitions for showCompactions



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21376) Incompatible change in Hive bucket computation

2019-08-06 Thread Piotr Findeisen (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901291#comment-16901291
 ] 

Piotr Findeisen commented on HIVE-21376:


bq. If bucketed data with those types has been written in 3.0 using v2, a user 
should recreate those bucketed tables using a more recent Hive version.

To me that means Hive 3 should not be deployed on production until this issue 
is fixed.
It's fixed in 3.1.2, but the latest available from HDP is 3.1.0.

[~jcamachorodriguez] do you have a timeline when 3.1.2 will be available in HDP?


> Incompatible change in Hive bucket computation
> --
>
> Key: HIVE-21376
> URL: https://issues.apache.org/jira/browse/HIVE-21376
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0
>Reporter: David Phillips
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.2
>
> Attachments: HIVE-21376.01.patch, HIVE-21376.patch
>
>
> HIVE-20007 seems to have inadvertently changed the bucket hash code 
> computation via {{ObjectInspectorUtils.getBucketHashCodeOld()}} for the 
> {{DATE}} and {{TIMESTAMP}} data types.
> {{DATE}} was previously computed using {{DateWritable}}, which uses 
> {{daysSinceEpoch}} as the hash code. It is now computed using 
> {{DateWritableV2}}, which uses the hash code of {{java.time.LocalDate}} 
> (which is not days since epoch).
> {{TIMESTAMP}} was previously computed using {{TimestampWritable}} and now uses 
> {{TimestampWritableV2}}. They ostensibly use the same hash code computation, 
> but there are two important differences:
>  # {{TimestampWritable}} rounds the number of milliseconds into the seconds 
> portion of the computation, but {{TimestampWritableV2}} does not.
>  # {{TimestampWritable}} gets the epoch time from {{java.sql.Timestamp}}, 
> which returns it relative to the JVM time zone, not UTC. 
> {{TimestampWritableV2}} uses a {{LocalDateTime}} relative to UTC.
> I was unable to get Hive 3.1 running in order to verify if this actually 
> causes data to be read or written incorrectly (there may be code above this 
> library method which makes things work correctly). However, if my 
> understanding is correct, this means Hive 3.1 is both forwards and backwards 
> incompatible with bucketed tables using either of these data types. It also 
> indicates that Hive needs tests to verify that the hash code does not change 
> between releases.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-20683) Add the Ability to push Dynamic Between and Bloom filters to Druid

2019-08-06 Thread Nishant Bangarwa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HIVE-20683:

Attachment: HIVE-20683.5.patch

> Add the Ability to push Dynamic Between and Bloom filters to Druid
> --
>
> Key: HIVE-20683
> URL: https://issues.apache.org/jira/browse/HIVE-20683
> Project: Hive
>  Issue Type: New Feature
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20683.1.patch, HIVE-20683.2.patch, 
> HIVE-20683.3.patch, HIVE-20683.4.patch, HIVE-20683.5.patch, HIVE-20683.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For optimizing joins, Hive generates a BETWEEN filter with min-max values and a 
> BLOOM filter for filtering one side of a semi-join.
> Druid 0.13.0 will have support for Bloom filters (Added via 
> https://github.com/apache/incubator-druid/pull/6222)
> Implementation details - 
> # Hive generates and passes the filters as part of 'filterExpr' in TableScan. 
> # DruidQueryBasedRecordReader gets this filter passed as part of the conf. 
> # During execution phase, before sending the query to druid in 
> DruidQueryBasedRecordReader we will deserialize this filter, translate it 
> into a DruidDimFilter and add it to existing DruidQuery.  Tez executor 
> already ensures that when we start reading results from the record reader, 
> all the dynamic values are initialized. 
> # Explaining a druid query also prints the query sent to druid as 
> {{druid.json.query}}. We also need to make sure to update the druid query 
> with the filters. During explain we do not have the actual values for the 
> dynamic values, so instead of values we will print the dynamic expression 
> itself as part of druid query. 
> Note:- This work needs druid to be updated to version 0.13.0



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21543) Use FilterHooks for show compactions

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901264#comment-16901264
 ] 

Hive QA commented on HIVE-21543:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
27s{color} | {color:blue} standalone-metastore/metastore-common in master has 
31 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
10s{color} | {color:blue} standalone-metastore/metastore-server in master has 
180 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 4 new + 212 unchanged - 0 fixed = 216 total (was 212) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
39s{color} | {color:red} standalone-metastore/metastore-common generated 1 new 
+ 31 unchanged - 0 fixed = 32 total (was 31) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
17s{color} | {color:red} standalone-metastore_metastore-common generated 2 new 
+ 51 unchanged - 0 fixed = 53 total (was 51) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-common |
|  |  
org.apache.hadoop.hive.metastore.utils.FilterUtils.filterCompactionsIfEnabled(boolean,
 MetaStoreFilterHook, String, List) makes inefficient use of keySet iterator 
instead of entrySet iterator  At FilterUtils.java:inefficient use of keySet 
iterator instead of entrySet iterator  At FilterUtils.java:[line 427] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18274/dev-support/hive-personality.sh
 |
| git revision | master / 5737095 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18274/yetus/diff-checkstyle-standalone-metastore_metastore-common.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18274/yetus/whitespace-eol.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18274/yetus/whitespace-tabs.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18274/yetus/new-findbugs-standalone-metastore_metastore-common.html
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-1

[jira] [Commented] (HIVE-21875) Implement drop partition related methods on temporary tables

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901217#comment-16901217
 ] 

Hive QA commented on HIVE-21875:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976796/HIVE-21875.02.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16795 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18273/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18273/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18273/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976796 - PreCommit-HIVE-Build

> Implement drop partition related methods on temporary tables
> 
>
> Key: HIVE-21875
> URL: https://issues.apache.org/jira/browse/HIVE-21875
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21875.01.patch, HIVE-21875.02.patch
>
>
> IMetaStoreClient exposes the following methods related to dropping partitions:
> {code:java}
> boolean dropPartition(String db_name, String tbl_name, List<String> 
> part_vals, boolean deleteData);
> boolean dropPartition(String catName, String db_name, String tbl_name, 
> List<String> part_vals, boolean deleteData);
> boolean dropPartition(String db_name, String tbl_name, List<String> 
> part_vals, PartitionDropOptions options);
> boolean dropPartition(String catName, String db_name, String tbl_name, 
> List<String> part_vals, PartitionDropOptions options);
> List<Partition> dropPartitions(String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists);
> List<Partition> dropPartitions(String catName, String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists);
> List<Partition> dropPartitions(String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists, boolean needResults);
> List<Partition> dropPartitions(String catName, String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists, boolean needResults);
> List<Partition> dropPartitions(String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, PartitionDropOptions options);
> List<Partition> dropPartitions(String catName, String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, PartitionDropOptions options);
> boolean dropPartition(String db_name, String tbl_name, String name, boolean 
> deleteData);
> boolean dropPartition(String catName, String db_name, String tbl_name, String 
> name, boolean deleteData){code}
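
For reference, a minimal usage sketch of the first overload in the list above 
(assumptions: a metastore reachable at thrift://localhost:9083, an existing 
partitioned table default.t1, and no error handling); this is generic 
IMetaStoreClient usage, not the temporary-table support this issue adds.

{code:java}
import java.util.Arrays;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;

public class DropPartitionExample {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://localhost:9083"); // assumed endpoint
        IMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            // Drop partition (year=2019, month=07, day=01) of default.t1 and delete its data.
            boolean dropped = client.dropPartition(
                "default", "t1", Arrays.asList("2019", "07", "01"), /* deleteData */ true);
            System.out.println("dropped = " + dropped);
        } finally {
            client.close();
        }
    }
}
{code}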



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901191#comment-16901191
 ] 

Naveen Gangam commented on HIVE-21241:
--

[~belugabehr] Going forward, could you also post the patch to reviewboard at 
reviews.apache.org and share the link here? It is easier to add comments on 
reviewboard than here.
Some comments + nits
1) {{LOG.debug("Could not parse timestamp text: {}", text);}}
Should this be info instead of debug?

2) Looks like this public static class has been removed, which is fine in a new 
release.
public static class MillisDateFormatParser implements DateTimeParser {
Does this need to be a public class, visible to everyone, given that it is not 
used outside this class?
public static class DefaultingTemporalAccessor implements TemporalAccessor {

3) We are removing a public constructor.
public TimestampParser(TimestampParser tsParser) {
Any chance we can retro-fit this? I am always wary of removing public 
constructors; we never know what code out there has been using it.

Rest looks good.

> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive uses Joda time for its TimeStampParser.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Migrate TimeStampParser to {{java.time}}
> I also added a couple new pre-canned timestamp parsers for convenience:
> * ISO 8601
> * RFC 1123
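
For context, a small sketch of what the java.time pre-canned formatters accept 
(illustrative only; this is not the patch's TimestampParser, and the sample strings 
are made up):

{code:java}
import java.time.OffsetDateTime;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class JavaTimeParsingDemo {
    public static void main(String[] args) {
        // ISO 8601, e.g. 2019-08-06T17:52:00+00:00
        OffsetDateTime iso = OffsetDateTime.parse(
            "2019-08-06T17:52:00+00:00", DateTimeFormatter.ISO_OFFSET_DATE_TIME);

        // RFC 1123, e.g. Tue, 6 Aug 2019 17:52:00 GMT
        ZonedDateTime rfc = ZonedDateTime.parse(
            "Tue, 6 Aug 2019 17:52:00 GMT", DateTimeFormatter.RFC_1123_DATE_TIME);

        System.out.println(iso.toInstant() + " / " + rfc.toInstant());
    }
}
{code}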



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21875) Implement drop partition related methods on temporary tables

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901184#comment-16901184
 ] 

Hive QA commented on HIVE-21875:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
15s{color} | {color:blue} standalone-metastore/metastore-server in master has 
180 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 1 unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18273/dev-support/hive-personality.sh
 |
| git revision | master / 5737095 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18273/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement drop partition related methods on temporary tables
> 
>
> Key: HIVE-21875
> URL: https://issues.apache.org/jira/browse/HIVE-21875
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21875.01.patch, HIVE-21875.02.patch
>
>
> IMetaStoreClient exposes the following methods related to dropping partitions:
> {code:java}
> boolean dropPartition(String db_name, String tbl_name, List<String> 
> part_vals, boolean deleteData);
> boolean dropPartition(String catName, String db_name, String tbl_name, 
> List<String> part_vals, boolean deleteData);
> boolean dropPartition(String db_name, String tbl_name, List<String> 
> part_vals, PartitionDropOptions options);
> boolean dropPartition(String catName, S

[jira] [Commented] (HIVE-22083) Values of tag order cannot be null, so it can be "byte" instead of "Byte"

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901132#comment-16901132
 ] 

Hive QA commented on HIVE-22083:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976698/HIVE-22083.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 16723 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query23] 
(batchId=298)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query88] 
(batchId=298)
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18272/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18272/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18272/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 16 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976698 - PreCommit-HIVE-Build

> Values of tag order cannot be null, so it can be "byte" instead of "Byte"
> -
>
> Key: HIVE-22083
> URL: https://issues.apache.org/jira/browse/HIVE-22083
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Attachments: HIVE-22083.1.patch
>
>
> Values of tag order cannot be null, so it can be "byte" instead of "Byte". 
> Switching between Byte and byte is "cheap" - the Byte objects are cached by 
> the JVM - but it still costs a bit more memory and CPU usage.
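
A small, self-contained illustration of the point in the quoted description (this 
is not Hive code; the class name is made up): all 256 Byte values are cached by the 
JVM, so boxing is cheap, but a Byte[] still holds object references where a 
primitive byte[] stores the values inline.

{code:java}
public class ByteBoxingDemo {
    public static void main(String[] args) {
        Byte a = Byte.valueOf((byte) 1);
        Byte b = Byte.valueOf((byte) 1);
        System.out.println(a == b);       // true: every possible Byte value is cached by the JVM

        byte[] primitiveTags = {0, 1, 2}; // values stored inline, no per-element object
        Byte[] boxedTags = {0, 1, 2};     // each element is a reference to a (cached) Byte object
        System.out.println(primitiveTags.length + " " + boxedTags.length);
    }
}
{code}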



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22083) Values of tag order cannot be null, so it can be "byte" instead of "Byte"

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901082#comment-16901082
 ] 

Hive QA commented on HIVE-22083:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 8 new + 601 unchanged - 51 
fixed = 609 total (was 652) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
14s{color} | {color:red} ql generated 5 new + 2238 unchanged - 12 fixed = 2243 
total (was 2250) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Dead store to evaluatorWindowFrameDefs in 
org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.createVectorPTFInfo(Operator,
 PTFDesc, VectorizationContext, VectorPTFDesc)  At 
Vectorizer.java:org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.createVectorPTFInfo(Operator,
 PTFDesc, VectorizationContext, VectorPTFDesc)  At Vectorizer.java:[line 4972] |
|  |  org.apache.hadoop.hive.ql.plan.HashTableSinkDesc.getTagOrder() may expose 
internal representation by returning HashTableSinkDesc.tagOrder  At 
HashTableSinkDesc.java:by returning HashTableSinkDesc.tagOrder  At 
HashTableSinkDesc.java:[line 263] |
|  |  org.apache.hadoop.hive.ql.plan.HashTableSinkDesc.setTagOrder(byte[]) may 
expose internal representation by storing an externally mutable object into 
HashTableSinkDesc.tagOrder  At HashTableSinkDesc.java:by storing an externally 
mutable object into HashTableSinkDesc.tagOrder  At HashTableSinkDesc.java:[line 
268] |
|  |  org.apache.hadoop.hive.ql.plan.JoinDesc.getTagOrder() may expose internal 
representation by returning JoinDesc.tagOrder  At JoinDesc.java:by returning 
JoinDesc.tagOrder  At JoinDesc.java:[line 419] |
|  |  org.apache.hadoop.hive.ql.plan.JoinDesc.setTagOrder(byte[]) may expose 
internal representation by storing an externally mutable object into 
JoinDesc.tagOrder  At JoinDesc.java:by storing an externally mutable object 
into JoinDesc.tagOrder  At JoinDesc.java:[line 429] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18272/dev-support/hive-personality.sh
 |
| git revision | master / 5737095 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18272/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18272/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18272/yetus.txt |
| Powered by | Apac

[jira] [Commented] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901046#comment-16901046
 ] 

David Mollitor commented on HIVE-21241:
---

[~ngangam]

> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive uses Joda time for its TimeStampParser.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Migrate TimeStampParser to {{java.time}}
> I also added a couple new pre-canned timestamp parsers for convenience:
> * ISO 8601
> * RFC 1123



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-7044) ORC Vector: column of empty strings is read back as null

2019-08-06 Thread Hui An (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-7044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901031#comment-16901031
 ] 

Hui An commented on HIVE-7044:
--

Looks like this bug has already been fixed in branch-3.1, but I do think we should 
add this test file to the project.

> ORC Vector: column of empty strings is read back as null
> 
>
> Key: HIVE-7044
> URL: https://issues.apache.org/jira/browse/HIVE-7044
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 0.13.0
>Reporter: Dain Sundstrom
>Assignee: Jitendra Nath Pandey
>Priority: Blocker
>  Labels: orcfile, vector
> Attachments: TestOrcEmptyString.java
>
>
> If I write a column of empty string values, the vectorized read code returns 
> a vector of nulls, but the non-vectorized code returns the correct values.
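
As a rough sketch of the distinction the report is about (illustrative only, not 
the attached TestOrcEmptyString.java; it assumes the storage-api BytesColumnVector 
class is on the classpath): in a vectorized batch an empty string must surface as a 
zero-length, non-null entry, not as a NULL.

{code:java}
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;

public class EmptyStringVsNullDemo {
    public static void main(String[] args) {
        BytesColumnVector col = new BytesColumnVector(2);
        col.noNulls = false;

        // Row 0: an empty string -> a zero-length value whose isNull flag stays false.
        col.setRef(0, new byte[0], 0, 0);
        col.isNull[0] = false;

        // Row 1: an actual NULL.
        col.isNull[1] = true;

        System.out.println("row 0 null? " + col.isNull[0] + ", length=" + col.length[0]);
        System.out.println("row 1 null? " + col.isNull[1]);
    }
}
{code}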



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Issue Comment Deleted] (HIVE-7044) ORC Vector: column of empty strings is read back as null

2019-08-06 Thread Hui An (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-7044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui An updated HIVE-7044:
-
Comment: was deleted

(was: Looks like this bug has already been fixed in branch-3.1, but I do think we 
should add this test file to the project.)

> ORC Vector: column of empty strings is read back as null
> 
>
> Key: HIVE-7044
> URL: https://issues.apache.org/jira/browse/HIVE-7044
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 0.13.0
>Reporter: Dain Sundstrom
>Assignee: Jitendra Nath Pandey
>Priority: Blocker
>  Labels: orcfile, vector
> Attachments: TestOrcEmptyString.java
>
>
> If I write a column of empty string values, the vectorized read code returns 
> a vector of nulls, but the non-vectorized code returns the correct values.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901019#comment-16901019
 ] 

Hive QA commented on HIVE-22040:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976777/HIVE-22040.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16723 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18270/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18270/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18270/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976777 - PreCommit-HIVE-Build

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.03.patch, HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. When I try to drop a 
> partition, it throws an exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exist. The partition's 
> metadata in MySQL has been deleted, but the exception is still thrown; it 
> also fails when connecting to HiveServer2 with JDBC from Java. This problem also 
> exists in the master branch. I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> --  Third, when I try to drop the partition, the metastore throws an exception with 
> 'Failed to delete parent: File does not exist' .
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> exception like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22037) HS2 should log when shutting down due to OOM

2019-08-06 Thread Adam Szita (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900998#comment-16900998
 ] 

Adam Szita commented on HIVE-22037:
---

Committed to master. Thanks for the fix!

> HS2 should log when shutting down due to OOM
> 
>
> Key: HIVE-22037
> URL: https://issues.apache.org/jira/browse/HIVE-22037
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Barnabas Maidics
>Assignee: Barnabas Maidics
>Priority: Major
> Attachments: HIVE-22037.2.patch, HIVE-22037.3.patch, HIVE-22037.patch
>
>
> Currently, if HS2 runs into OOM issue, ThreadPoolExecutorWithOomHook kicks in 
> and runs oomHook, which will stop HS2. Everything happens without logging. In 
> the log, you can only see, that HS2 stopped. 
> We should log the stacktrace. 
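
A conceptual sketch of the requested behaviour (this is not Hive's actual 
ThreadPoolExecutorWithOomHook; the class and method names are made up): log the 
OutOfMemoryError and its stack trace before the OOM hook stops the server.

{code:java}
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoggingOomHookExecutor extends ThreadPoolExecutor {
    private final Runnable oomHook;

    public LoggingOomHookExecutor(int poolSize, Runnable oomHook) {
        super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        this.oomHook = oomHook;
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // Tasks submitted via submit() wrap their error in the returned Future.
        if (t == null && r instanceof Future<?>) {
            try {
                ((Future<?>) r).get();
            } catch (ExecutionException e) {
                t = e.getCause();
            } catch (InterruptedException | CancellationException e) {
                // not an OOM; ignore in this sketch
            }
        }
        if (t instanceof OutOfMemoryError) {
            // Log the error and its stack trace *before* the hook shuts the server down.
            System.err.println("Shutting down due to OutOfMemoryError:");
            t.printStackTrace();
            oomHook.run();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LoggingOomHookExecutor pool =
            new LoggingOomHookExecutor(1, () -> System.err.println("oomHook: stopping the server"));
        pool.execute(() -> { throw new OutOfMemoryError("simulated"); });
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
{code}

The logging happens before the hook runs, so even when the hook succeeds in 
stopping the server, the reason for the shutdown is visible in the log.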



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22037) HS2 should log when shutting down due to OOM

2019-08-06 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita updated HIVE-22037:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

> HS2 should log when shutting down due to OOM
> 
>
> Key: HIVE-22037
> URL: https://issues.apache.org/jira/browse/HIVE-22037
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Barnabas Maidics
>Assignee: Barnabas Maidics
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22037.2.patch, HIVE-22037.3.patch, HIVE-22037.patch
>
>
> Currently, if HS2 runs into OOM issue, ThreadPoolExecutorWithOomHook kicks in 
> and runs oomHook, which will stop HS2. Everything happens without logging. In 
> the log, you can only see, that HS2 stopped. 
> We should log the stacktrace. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21114) Create read-only transactions

2019-08-06 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900980#comment-16900980
 ] 

Peter Vary commented on HIVE-21114:
---

[~ikryvenko]: Any news on this change? Still looks interesting, and would be 
very useful.

> Create read-only transactions
> -
>
> Key: HIVE-21114
> URL: https://issues.apache.org/jira/browse/HIVE-21114
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Major
>
> With HIVE-21036 we have a way to indicate that a txn is read only.
> We should (at least in auto-commit mode) determine if the single stmt is a 
> read and mark the txn accordingly.  
> Then we can optimize {{TxnHandler.commitTxn()}} so that it doesn't do any 
> checks in write_set etc.
> {{TxnHandler.commitTxn()}} already starts with {{lockTransactionRecord(stmt, 
> txnid, TXN_OPEN)}} so it can read the txn type in the same SQL stmt.
> HiveOperation only has QUERY, which includes Insert and Select, so this 
> requires figuring out how to determine if a query is a SELECT.  By the time 
> {{Driver.openTransaction();}} is called, we have already parsed the query so 
> there should be a way to know if the statement only reads.
> For multi-stmt txns (once these are supported) we should allow the user to 
> indicate that a txn is read-only and then not allow any statements that can 
> make modifications in this txn. This should be a different jira.
> cc [~ikryvenko]
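
A rough sketch of the read-only check discussed above, assuming access to the analyzed QueryPlan at the point where the transaction is opened (the helper name is illustrative, and real logic would also have to account for DDL and explicit lock requests):

{code:java}
// Illustrative only: a statement whose analyzed plan produces no write entities
// can be treated as read-only when the transaction is opened.
boolean isReadOnlyStatement(org.apache.hadoop.hive.ql.QueryPlan plan) {
  return plan.getOutputs() == null || plan.getOutputs().isEmpty();
}
{code}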



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900975#comment-16900975
 ] 

Hive QA commented on HIVE-22040:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
30s{color} | {color:blue} standalone-metastore/metastore-common in master has 
31 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
6s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18270/dev-support/hive-personality.sh
 |
| git revision | master / 4510efd |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18270/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.03.patch, HIVE-22040.patch
>
>
> I create a manage table with multi partition columns, when i try to drop 
> partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exist. The partition's 
> metadata in mysql has been deleted, but the exception is still thrown. it 
> will fail if  connecting hiveserver2 with jdbc by java, this problem also 
> exists in master branch, I  think it is very unfriendly and we should fix it.
> E
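
One defensive shape such a fix could take, sketched here for illustration only (this is not the attached patch; {{fs}} and {{parentPath}} stand for the metastore's FileSystem handle and the partition's parent directory):

{code:java}
// Illustrative only: a parent directory that is already gone is treated as
// already cleaned up rather than failing the whole drop-partition call.
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

void deleteParentIfPresent(FileSystem fs, Path parentPath) throws IOException {
  if (fs.exists(parentPath)) {
    fs.delete(parentPath, true);  // recursive delete of the now-empty parent
  }
}
{code}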

[jira] [Commented] (HIVE-21543) Use FilterHooks for show compactions

2019-08-06 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900955#comment-16900955
 ] 

Peter Vary commented on HIVE-21543:
---

We should check whether the filterHooks are used with Ranger deployments or not.

> Use FilterHooks for show compactions
> 
>
> Key: HIVE-21543
> URL: https://issues.apache.org/jira/browse/HIVE-21543
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-21543.01.patch
>
>
> Use FilterHooks for checking dbs/tables/partitions for showCompactions



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21847) Reduce the communication rate between TezAM and LlapDaemons for Llap statistics

2019-08-06 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900953#comment-16900953
 ] 

Peter Vary commented on HIVE-21847:
---

Opening a new communication channel between TezAMs to query the data from the 
leader TezAM could be problematic; it can also put extra load on the leader 
TezAM, adding unpredictable extra latency for the queries running on it.

Possible solutions:
 * A new component collects the data from the LLAP daemons, and the TezAMs get the data 
from this new component - the same new communication channels would have to be opened as 
for the original suggestion, but it is much clearer why/when this is needed.
 * If we do not want to use the metrics data for anything other than the health 
check, then only the leader TezAM should query the data and act on it. The other 
(non-leader) TezAMs should only make sure that there is at least one leader TezAM.

> Reduce the communication rate between TezAM and LlapDaemons for Llap 
> statistics
> ---
>
> Key: HIVE-21847
> URL: https://issues.apache.org/jira/browse/HIVE-21847
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Peter Vary
>Priority: Major
>
> HIVE-21846 is suboptimal if we have a big number of TezAMs and LlapDaemons.
> We have to find a better way which is scalable.
> A possible solution could be:
>  * Elect a TezAM leader for statistics fetching using ZooKeeper for the 
> election
>  * TezAM should become a leader and fetch LlapDaemon data from the Daemons if 
> there is no existing leader
>  * TezAM should query the data from the leader TezAM if there is an existing 
> leader.
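
A minimal sketch of the ZooKeeper-based election mentioned in the first bullet, using Apache Curator's LeaderLatch recipe (the quorum string and znode path are placeholders; this illustrates the idea, it is not Hive code):

{code:java}
// Illustrative only: each TezAM joins a leader latch; the one holding leadership
// is the only AM that polls the LLAP daemons for statistics.
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LlapStatsLeaderElection {
  public static void main(String[] args) throws Exception {
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181,zk3:2181",            // placeholder ZooKeeper quorum
        new ExponentialBackoffRetry(1000, 3));
    zk.start();

    LeaderLatch latch = new LeaderLatch(zk, "/hive/llap-stats-leader"); // placeholder znode
    latch.start();
    latch.await();  // blocks until this AM becomes the leader

    // From here on, only this TezAM would fetch statistics from the LLAP daemons.
    System.out.println("This TezAM is now the statistics-fetching leader");
  }
}
{code}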



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21740) Collect LLAP execution latency metrics

2019-08-06 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21740:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Found a better solution to collect the metrics on the LLAP side.

> Collect LLAP execution latency metrics
> --
>
> Key: HIVE-21740
> URL: https://issues.apache.org/jira/browse/HIVE-21740
> Project: Hive
>  Issue Type: New Feature
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21740.2.patch, HIVE-21740.3.patch, 
> HIVE-21740.4.patch, HIVE-21740.5.patch, HIVE-21740.6.patch, HIVE-21740.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Collect metrics for LLAP task execution times



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (HIVE-21906) Blacklist limping nodes

2019-08-06 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary resolved HIVE-21906.
---
Resolution: Fixed

> Blacklist limping nodes
> ---
>
> Key: HIVE-21906
> URL: https://issues.apache.org/jira/browse/HIVE-21906
> Project: Hive
>  Issue Type: New Feature
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>
> If a specific LLAP node is limping, it can degrade the performance of the 
> whole cluster.
> We should find a way to identify the limping nodes and act on them, either by 
> disabling the node altogether or by decreasing the load on it.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900922#comment-16900922
 ] 

Hive QA commented on HIVE-22081:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976762/HIVE-21917.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16723 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18269/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18269/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18269/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976762 - PreCommit-HIVE-Build

> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-22081.patch
>
>
> If automatic compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs some checks in a 
> for loop before requesting compaction for the eligible ones. Although the Initiator 
> thread is configured to run at a 5-minute interval by default, with many objects it 
> keeps on running, as these checks are IO intensive and hog CPU.
> In the proposed changes, I am planning to:
> 1. pass fewer objects to the for loop by filtering out the objects based on the 
> condition which we are checking within the loop.
> 2. do an async call using a Future to determine the compaction type (this is where 
> we do the FileSystem calls)
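
A hedged sketch of point 2, moving the FileSystem-heavy check onto a thread pool with CompletableFuture (the candidate list, {{determineCompactionType}} and {{requestCompaction}} are placeholders for the Initiator's own logic, and the pool size is an assumption):

{code:java}
// Illustrative only: determine the compaction type of each pre-filtered candidate
// in parallel so the IO-intensive checks do not serialize the Initiator loop.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

ExecutorService pool = Executors.newFixedThreadPool(8);  // pool size is an assumption

List<CompletableFuture<Void>> checks = candidates.stream()
    .map(ci -> CompletableFuture
        .supplyAsync(() -> determineCompactionType(ci), pool)    // FileSystem calls happen here
        .thenAccept(type -> { if (type != null) requestCompaction(ci, type); }))
    .collect(Collectors.toList());

CompletableFuture.allOf(checks.toArray(new CompletableFuture[0])).join();
pool.shutdown();
{code}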



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22081) Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there are too many Table/partitions are eligible for compaction

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900893#comment-16900893
 ] 

Hive QA commented on HIVE-22081:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} serde in master has 193 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 12 new + 23 unchanged - 2 
fixed = 35 total (was 25) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18269/dev-support/hive-personality.sh
 |
| git revision | master / 4510efd |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18269/yetus/diff-checkstyle-ql.txt
 |
| modules | C: serde ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18269/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hivemetastore Performance: Compaction Initiator Thread overwhelmed if there 
> are too many Table/partitions are eligible for compaction 
> --
>
> Key: HIVE-22081
> URL: https://issues.apache.org/jira/browse/HIVE-22081
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21917.01.patch, HIVE-22081.patch
>
>
> If automatic compaction is turned on, the Initiator thread checks for potential 
> tables/partitions that are eligible for compaction and runs some checks in a 
> for loop before requesting compaction for the eligible ones. Although the Initiator 
> thread is configured to run at a 5-minute interval by default, with many objects it 
> keeps on running, as these checks are IO intensive and hog CPU.
> In the proposed changes, I am pl

[jira] [Commented] (HIVE-22080) Prevent implicit conversion from String/char/varchar to double/decimal

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900860#comment-16900860
 ] 

Hive QA commented on HIVE-22080:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976754/HIVE-22080.2.patch

{color:green}SUCCESS:{color} +1 due to 29 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16723 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18268/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18268/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18268/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976754 - PreCommit-HIVE-Build

> Prevent implicit conversion from String/char/varchar to double/decimal
> --
>
> Key: HIVE-22080
> URL: https://issues.apache.org/jira/browse/HIVE-22080
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22080.1.patch, HIVE-22080.2.patch
>
>
> Implicit conversions from string family types to any non-string family types 
> are invalid. The user can force the conversion by turning off the setting 
> hive.metastore.disallow.incompatible.col.type.changes. If it is not turned off, 
> such a conversion should throw an error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-08-06 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21910:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Decided on another solution - removing blacklisted hosts altogether from the 
decision matrix

> Multiple target location generation in HostAffinitySplitLocationProvider
> 
>
> Key: HIVE-21910
> URL: https://issues.apache.org/jira/browse/HIVE-21910
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21910.2.patch, HIVE-21910.3.patch, 
> HIVE-21910.4.patch, HIVE-21910.5.patch, HIVE-21910.6.patch, 
> HIVE-21910.7.patch, HIVE-21910.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We need to generate multiple target locations in 
> HostAffinitySplitLocationProvider so that we have deterministic fallback 
> nodes in case the target node is disabled.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22080) Prevent implicit conversion from String/char/varchar to double/decimal

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900824#comment-16900824
 ] 

Hive QA commented on HIVE-22080:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
31s{color} | {color:blue} standalone-metastore/metastore-common in master has 
31 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} standalone-metastore/metastore-common: The patch 
generated 0 new + 70 unchanged - 1 fixed = 70 total (was 71) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18268/dev-support/hive-personality.sh
 |
| git revision | master / 4510efd |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18268/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Prevent implicit conversion from String/char/varchar to double/decimal
> --
>
> Key: HIVE-22080
> URL: https://issues.apache.org/jira/browse/HIVE-22080
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22080.1.patch, HIVE-22080.2.patch
>
>
> Implicit conversions from string family types to any non-string family types 
> are invalid. The user can force the conversion by turning off the setting 
> hive.metastore.disallow.incompatible.col.type.changes. If it is not turned off, 
> such a conversion should throw an error.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22086) Hive revoke the grant err by hive.security.authorization.createtable.role.grants ( SQL Standard Based Hive Authorization )

2019-08-06 Thread xinzhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinzhang updated HIVE-22086:

Description: 
1. Start hiveserver2

>/opt/hive/hive-bin/bin/hiveserver2 --hiveconf hive.server2.thrift.port=50033 
>--hiveconf hive.server2.webui.port=10003

2. create table

#/opt/hive/hive-bin/bin/beeline -u jdbc:hive2://172.31.10.119:50033 -n tools

>use tools;

>create table test1 as select * from tools.test99 limit 10;

>show grant on table tools.test1;

+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
| database  | table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  | grant_time     | grantor  |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
| tools     | test1  |            |         | da              | ROLE            | SELECT     | true          | 1565061852000  | tools    |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+

 

3. revoke select on role da

> set role damin;

> revoke select on table tools.test1 from role da;

4. err log

FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. Cannot find privilege Privilege 
[name=SELECT, columns=null] for Principal [name=da, type=ROLE] on Object 
[type=TABLE_OR_VIEW, name=tools.test1] granted by tools

 

  was:
# Start hiveserver2

>/opt/hive/hive-bin/bin/hiveserver2 --hiveconf hive.server2.thrift.port=50033 
>--hiveconf hive.server2.webui.port=10003
 # create table

#/opt/hive/hive-bin/bin/beeline -u jdbc:hive2://172.31.10.119:50033 -n tools

>use tools;

>create table test1 as select * from tools.test99 limit 10;

>show grant on table tools.test1;

+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
| database  | table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  | grant_time     | grantor  |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
| tools     | test1  |            |         | da              | ROLE            | SELECT     | true          | 1565061852000  | tools    |
+-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+

 
 # revoke select on role da

> set role damin;

> revoke select on table tools.test1 from role da;
 # err log

FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. Cannot find privilege Privilege 
[name=SELECT, columns=null] for Principal [name=da, type=ROLE] on Object 
[type=TABLE_OR_VIEW, name=tools.test1] granted by tools

 


> Hive revoke the grant err by 
> hive.security.authorization.createtable.role.grants  ( SQL Standard Based 
> Hive Authorization )
> ---
>
> Key: HIVE-22086
> URL: https://issues.apache.org/jira/browse/HIVE-22086
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, Beeline, HiveServer2
>Affects Versions: 2.3.5
> Environment: host 172.31.10.119
> port 50033
> version apache-hive-2.3.5-bin
> database tools
> hive-site.xml
> <property>
>   <name>hive.security.authorization.createtable.role.grants</name>
>   <value>da:select;</value>
> </property>
> <property>
>   <name>hive.users.in.admin.role</name>
>   <value>root,tools</value>
> </property>
>Reporter: xinzhang
>Priority: Major
>
> 1. Start hiveserver2
> >/opt/hive/hive-bin/bin/hiveserver2 --hiveconf hive.server2.thrift.port=50033 
> >--hiveconf hive.server2.webui.port=10003
> 2. create table
> #/opt/hive/hive-bin/bin/beeline -u jdbc:hive2://172.31.10.119:50033 -n tools
> >use tools;
> >create table test1 as select * from tools.test99 limit 10;
> >show grant on table tools.test1;
> +-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
> | database  | table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  | grant_time     | grantor  |
> +-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
> | tools     | test1  |            |         | da              | ROLE            | SELECT     | true          | 1565061852000  | tools    |
> +-----------+--------+------------+---------+-----------------+-----------------+------------+---------------+----------------+----------+
>  
> 3. revoke select on role da
> > set role damin;
> > revoke s

[jira] [Commented] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900780#comment-16900780
 ] 

Hive QA commented on HIVE-21241:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976752/HIVE-21241.5.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16731 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18267/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18267/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18267/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976752 - PreCommit-HIVE-Build

> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive uses Joda time for its TimeStampParser.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Migrate TimeStampParser to {{java.time}}
> I also added a couple of new pre-canned timestamp parsers for convenience:
> * ISO 8601
> * RFC 1123
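
For reference, a small java.time sketch of the two pre-canned formats mentioned, using only standard JDK classes (an illustration of the target API, not the patch itself):

{code:java}
// Illustrative only: parsing ISO 8601 and RFC 1123 timestamps with java.time.
import java.time.OffsetDateTime;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

OffsetDateTime iso = OffsetDateTime.parse(
    "2019-08-06T12:34:56+00:00", DateTimeFormatter.ISO_OFFSET_DATE_TIME);

ZonedDateTime rfc1123 = ZonedDateTime.parse(
    "Tue, 6 Aug 2019 12:34:56 GMT", DateTimeFormatter.RFC_1123_DATE_TIME);
{code}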



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21875) Implement drop partition related methods on temporary tables

2019-08-06 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-21875:
-
Attachment: HIVE-21875.02.patch

> Implement drop partition related methods on temporary tables
> 
>
> Key: HIVE-21875
> URL: https://issues.apache.org/jira/browse/HIVE-21875
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21875.01.patch, HIVE-21875.02.patch
>
>
> IMetaStoreClient exposes the following methods related to dropping partitions:
> {code:java}
> boolean dropPartition(String db_name, String tbl_name, List<String> 
> part_vals, boolean deleteData);
> boolean dropPartition(String catName, String db_name, String tbl_name, 
> List<String> part_vals, boolean deleteData);
> boolean dropPartition(String db_name, String tbl_name, List<String> 
> part_vals, PartitionDropOptions options);
> boolean dropPartition(String catName, String db_name, String tbl_name, 
> List<String> part_vals, PartitionDropOptions options);
> List<Partition> dropPartitions(String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists);
> List<Partition> dropPartitions(String catName, String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists);
> List<Partition> dropPartitions(String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists, boolean needResults);
> List<Partition> dropPartitions(String catName, String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, boolean deleteData, boolean 
> ifExists, boolean needResults);
> List<Partition> dropPartitions(String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, PartitionDropOptions options);
> List<Partition> dropPartitions(String catName, String dbName, String tblName, 
> List<ObjectPair<Integer, byte[]>> partExprs, PartitionDropOptions options);
> boolean dropPartition(String db_name, String tbl_name, String name, boolean 
> deleteData);
> boolean dropPartition(String catName, String db_name, String tbl_name, String 
> name, boolean deleteData){code}
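
For orientation, a minimal call of the first overload above, assuming an already-connected IMetaStoreClient named {{client}} and a table partitioned by year/month/day (database, table and values are placeholders):

{code:java}
// Illustrative only: drop one partition by its partition values and delete its data.
import java.util.Arrays;
import java.util.List;

List<String> partVals = Arrays.asList("2019", "07", "01");  // year, month, day
boolean dropped = client.dropPartition("default", "sales", partVals, true);
{code}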



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21241) Migrate TimeStamp Parser From Joda Time

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900725#comment-16900725
 ] 

Hive QA commented on HIVE-21241:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} serde in master has 193 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} common: The patch generated 0 new + 0 unchanged - 14 
fixed = 0 total (was 14) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} The patch serde passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} common generated 0 new + 61 unchanged - 1 fixed = 61 
total (was 62) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} serde in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} common generated 0 new + 26 unchanged - 1 fixed = 26 
total (was 27) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} serde in the patch passed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18267/dev-support/hive-personality.sh
 |
| git revision | master / 4510efd |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common serde U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18267/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Migrate TimeStamp Parser From Joda Time
> ---
>
> Key: HIVE-21241
> URL: https://issues.apache.org/jira/browse/HIVE-21241
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21241.1.patch, HIVE-21241.2.patch, 
> HIVE-21241.3.patch, HIVE-21241.4.patch, HIVE-21241.5.patch
>
>  Ti

[jira] [Commented] (HIVE-16587) NPE when inserting complex types with nested null values

2019-08-06 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900717#comment-16900717
 ] 

Sankar Hariappan commented on HIVE-16587:
-

[~nareshpr]
I think just woi.getPrimitiveWritableObject(value).getLength() should be 
evaluated to 0 if value == null, instead of 
JavaDataModel.get().lengthForByteArrayOfSize(
 woi.getPrimitiveWritableObject(value).getLength())
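
One reading of that suggestion, sketched as a null guard (illustrative only, not the committed change; variable names follow the comment above):

{code:java}
// Illustrative only: treat a null value as zero-length instead of dereferencing it.
int rawLength = (value == null)
    ? 0
    : woi.getPrimitiveWritableObject(value).getLength();
return JavaDataModel.get().lengthForByteArrayOfSize(rawLength);
{code}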

> NPE when inserting complex types with nested null values
> 
>
> Key: HIVE-16587
> URL: https://issues.apache.org/jira/browse/HIVE-16587
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 1.2.1
>Reporter: Jason Dere
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-16587.patch
>
>
> {noformat}
> CREATE TABLE complex1 (c0 int, c1 array<int>, c2 map<int,string>, c3 
> struct<f1:int, f2:string, f3:array<int>>, c4 array<struct<f1:int, f2:string, 
> f3:array<int>>>)
> insert into complex1
>  select 3, array(1, 2, null), map(1, 'one', 2, null), named_struct('f1', 
> cast(null as int), 'f2', cast(null as string), 'f3', array(1,2,null)), 
> array(named_struct('f1', 11, 'f2', 'two', 'f3', array(2,3,4)))
> {noformat}
> Gives the following error:
> {noformat}
> Caused by: org.apache.hive.service.cli.HiveSQLException: Error while 
> compiling statement: FAILED: NullPointerException null
>   at 
> org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:207)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:291)
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:255)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:531)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:517)
>   at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
>   at 
> org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
>   at com.sun.proxy.$Proxy126.executeStatementAsync(Unknown Source)
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:310)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:530)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1437)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1422)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getWritableSize(StatsUtils.java:1144)
>   at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfMap(StatsUtils.java:1106)
>   at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfComplexTypes(StatsUtils.java:978)
>   at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getAvgColLenOf(StatsUtils.java:916)
>   at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getColStatisticsFromExpression(StatsUtils.java:1371)
>   at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getColStatisticsFromExprMap(StatsUtils.java:1194)
>   at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$SelectStatsRule.process(StatsRulesProcFactory.java:187)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:9

[jira] [Commented] (HIVE-22074) Slow compilation due to IN to OR transformation

2019-08-06 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16900708#comment-16900708
 ] 

Hive QA commented on HIVE-22074:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12976750/HIVE-22074.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16691 tests 
executed
*Failed tests:*
{noformat}
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestObjectStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18266/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18266/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18266/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12976750 - PreCommit-HIVE-Build

> Slow compilation due to IN to OR transformation
> ---
>
> Key: HIVE-22074
> URL: https://issues.apache.org/jira/browse/HIVE-22074
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-22074.1.patch, HIVE-22074.2.patch, 
> HIVE-22074.3.patch
>
>
> Currently Hive transforms IN expressions to OR (e.g. {{c IN (1, 2, 3)}} becomes 
> {{c = 1 OR c = 2 OR c = 3}}) to apply various CBO rules. 
> This incurs a significant performance hit if the IN consists of a large number of 
> expressions. 
> It is better not to transform IN expressions to OR in such cases, because the 
> overall benefit of the various optimizations/transformations is unrealized due to 
> the compilation overhead.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22083) Values of tag order cannot be null, so it can be "byte" instead of "Byte"

2019-08-06 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-22083:
---
Status: Patch Available  (was: In Progress)

> Values of tag order cannot be null, so it can be "byte" instead of "Byte"
> -
>
> Key: HIVE-22083
> URL: https://issues.apache.org/jira/browse/HIVE-22083
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Attachments: HIVE-22083.1.patch
>
>
> Values of tag order cannot be null, so it can be "byte" instead of "Byte". 
> Switching between Byte and byte is "cheap" - the Byte objects are cached by 
> the JVM - but it still costs a bit more memory and CPU usage.
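
As a tiny illustration of the cost being discussed (not Hive code): the boxed field holds an object reference and every assignment boxes via Byte.valueOf, while the primitive stores the value inline.

{code:java}
// Illustrative only: the same field as a boxed wrapper vs. a primitive.
class TagBoxed {
  Byte tag;   // object reference; assignments box via Byte.valueOf(byte)
}

class TagPrimitive {
  byte tag;   // stored inline in the instance; no boxing on assignment
}
{code}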



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)