[jira] [Commented] (HIVE-21189) hive.merge.nway.joins should default to false

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756982#comment-16756982
 ] 

Hive QA commented on HIVE-21189:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 18s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-15858/dev-support/hive-personality.sh |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common U: common |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-15858/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> hive.merge.nway.joins should default to false
> -
>
> Key: HIVE-21189
> URL: https://issues.apache.org/jira/browse/HIVE-21189
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-21189.patch
>
>
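The issue carries no description, but the intent is clear from the title: turn off merging of adjacent joins into a single n-way join operator by default. Until a release ships the new default, the behavior can be overridden explicitly; a sketch (property name taken from the issue title, the current default of true is implied by the issue):

```sql
-- Override the current default per session (or set it in hive-site.xml):
set hive.merge.nway.joins=false;
```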




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756964#comment-16756964
 ] 

Hive QA commented on HIVE-17938:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956982/HIVE-17938.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15718 tests 
executed
*Failed tests:*
{noformat}
TestReplicationScenariosIncrementalLoadAcidTables - did not produce a TEST-*.xml file (likely timed out) (batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15857/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15857/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15857/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956982 - PreCommit-HIVE-Build

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch
>
>
> This setting (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> It is, however, not yet enabled by default in Apache Hive.
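The setting discussed above is a HiveServer2-wide switch; a sketch of enabling it explicitly (whether it can also be changed per session, or only at server startup, depends on the release):

```xml
<!-- hive-site.xml: let HiveServer2 compile queries from different sessions
     concurrently instead of serializing them behind a single compile lock. -->
<property>
  <name>hive.driver.parallel.compilation</name>
  <value>true</value>
</property>
```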



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19161) Add authorizations to information schema

2019-01-30 Thread Song Jun (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756945#comment-16756945
 ] 

Song Jun commented on HIVE-19161:
-

[~daijy] After this patch, Ranger no longer works with Hive.

RangerHiveAuthorizer has no getHivePolicyProvider method, so HiveServer2 fails at startup:

{code:java}
java.lang.AbstractMethodError: org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizer.getHivePolicyProvider()Lorg/apache/hadoop/hive/ql/security/authorization/plugin/HivePolicyProvider;
    at org.apache.hive.service.server.HiveServer2.startPrivilegeSynchonizer(HiveServer2.java:985) ~[hive-service-3.1.1.jar:3.1.1]
    at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:726) ~[hive-service-3.1.1.jar:3.1.1]
    at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1037) [hive-service-3.1.1.jar:3.1.1]
    at org.apache.hive.service.server.HiveServer2.access$1600(HiveServer2.java:140) [hive-service-3.1.1.jar:3.1.1]
    at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:1305) [hive-service-3.1.1.jar:3.1.1]
    at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:1149) [hive-service-3.1.1.jar:3.1.1]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_151]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_151]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_151]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_151]
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221) [hadoop-common-2.7.2.jar:?]
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136) [hadoop-common-2.7.2.jar:?]
{code}
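For context, the error above is a binary-compatibility failure: the patch added getHivePolicyProvider() to Hive's authorizer contract, so a plugin compiled against the older interface (here Ranger's RangerHiveAuthorizer) hits AbstractMethodError the first time HiveServer2 calls the missing method. A minimal sketch of the override the plugin side would need, using stand-in types since the real interfaces live in the Hive and Ranger jars:

```java
// Stand-in for org.apache.hadoop.hive.ql.security.authorization.plugin.HivePolicyProvider.
interface HivePolicyProvider {}

// Stand-in for the authorizer contract that HIVE-19161 extended.
abstract class HiveAuthorizerBase {
    // New method: authorizers compiled against the older contract lack it,
    // producing AbstractMethodError at runtime when HiveServer2 invokes it.
    public abstract HivePolicyProvider getHivePolicyProvider();
}

// Sketch of the minimal override a plugin like RangerHiveAuthorizer would add.
class PatchedAuthorizer extends HiveAuthorizerBase {
    @Override
    public HivePolicyProvider getHivePolicyProvider() {
        // Returning null signals "no policy provider"; HiveServer2's privilege
        // synchronizer can then skip synchronization instead of crashing.
        return null;
    }
}

class AuthorizerSketch {
    public static void main(String[] args) {
        HiveAuthorizerBase auth = new PatchedAuthorizer();
        System.out.println(auth.getHivePolicyProvider() == null
                ? "no policy provider" : "provider present");  // prints "no policy provider"
    }
}
```

The real fix, of course, is to rebuild the Ranger plugin against the Hive 3.x interfaces; the sketch only shows why the method is mandatory.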


> Add authorizations to information schema
> 
>
> Key: HIVE-19161
> URL: https://issues.apache.org/jira/browse/HIVE-19161
> Project: Hive
>  Issue Type: Improvement
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19161.1.patch, HIVE-19161.10.patch, 
> HIVE-19161.11.patch, HIVE-19161.12.patch, HIVE-19161.13.patch, 
> HIVE-19161.14.patch, HIVE-19161.15.patch, HIVE-19161.2.patch, 
> HIVE-19161.3.patch, HIVE-19161.4.patch, HIVE-19161.5.patch, 
> HIVE-19161.6.patch, HIVE-19161.7.patch, HIVE-19161.8.patch, HIVE-19161.9.patch
>
>
> We need to control access to the information schema so that users can only 
> query the information they are authorized to see.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21188) SemanticException for query on view with masked table

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21188:
---
Attachment: HIVE-21188.01.patch

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.patch
>
>
> The issue occurs when the view reference is fully qualified. The following q 
> file can be used to reproduce it:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# Caused by a faulty UDF, time_waiting(Long sleepSeconds):
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}
# In session_1:
{code}select time_waiting(100);{code}
# In session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 gets no response from the thrift server until session_1 finishes its 100-second wait.

This bug may leave HiveServer2 in an unavailable state.

# session_1 runs, waiting 200s:
 !session_1.jpg! 
# session_2 runs at the same time but is blocked by session_1; as the screenshot shows, it waits 197s and only returns after session_1 has returned:
 !session_2.jpg! 

# If someone uses this as a deliberate attack, HiveServer2 will not go down, but it will no longer respond.


  was: (previous revision of the same description; it differed only in whitespace)



> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # Caused by a faulty UDF, time_waiting(Long sleepSeconds):
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> 
> # In session_1:
> {code}select time_waiting(100);{code}
> # In session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 gets no response from the thrift server until session_1 finishes its 100-second wait.
> This bug may leave HiveServer2 in an unavailable state.
> 
> # session_1 runs, waiting 200s:
>  !session_1.jpg! 
> # session_2 runs at the same time but is blocked by session_1; as the screenshot shows, it waits 197s and only returns after session_1 has returned:
>  !session_2.jpg! 
> 
> # If someone uses this as a deliberate attack, HiveServer2 will not go down, but it will no longer respond.
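One plausible mechanism for the blocking described above (and the motivation for hive.driver.parallel.compilation, discussed under HIVE-17938 earlier in this digest) is that HiveServer2 1.x serializes query compilation behind a single global lock, and a deterministic UDF called with a constant argument can be evaluated during compilation, i.e. while that lock is held. A self-contained sketch of that failure mode, with all names hypothetical:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a single compile lock plus a UDF that sleeps while the lock is held
// stalls every other session, matching the session_1 / session_2 screenshots.
class CompileLockDemo {
    private static final ReentrantLock compileLock = new ReentrantLock();

    // Stands in for HiveServer2 compiling (and constant-folding) a query.
    static String compile(String query, long udfSleepMillis) throws InterruptedException {
        compileLock.lock();
        try {
            Thread.sleep(udfSleepMillis);  // the sleeping UDF runs under the lock
            return "compiled: " + query;
        } finally {
            compileLock.unlock();
        }
    }

    // Returns how long session_2's trivial query was stalled, in milliseconds.
    static long blockedMillis(long session1SleepMillis) throws InterruptedException {
        Thread session1 = new Thread(() -> {
            try { compile("select time_waiting(2)", session1SleepMillis); }
            catch (InterruptedException ignored) { }
        });
        session1.start();
        Thread.sleep(100);                 // let session_1 grab the compile lock first
        long start = System.currentTimeMillis();
        compile("select 1", 0);            // session_2: no UDF, but still waits
        long elapsed = System.currentTimeMillis() - start;
        session1.join();
        return elapsed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("session_2 stalled for ~" + blockedMillis(2000) + " ms");
    }
}
```

Run as-is, session_2's trivial query stalls for roughly the UDF's sleep time even though it does no work itself, which is consistent with the 197-second wait shown in session_2.jpg.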



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Summary: hive thrift server may be blocked by session level waiting,caused 
by udf!  (was: hive thrift server may be blocked by session level waiting!)

> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: kongxianghe
>Priority: Critical
>
> # Caused by a faulty UDF, time_waiting(Long sleepSeconds):
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # In session_1:
> {code}select time_waiting(100);{code}
> # In session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 gets no response from the thrift server until session_1 finishes its 100-second wait.
> This bug may leave HiveServer2 in an unavailable state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# cause by an error UDF function!time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF throws Exception{
  public String evaluate(Long sleepSeconds){
 ...
 Thread.sleep(Long.parseLong(sleepSeconds) * 1000);
 return "ok";
 }
}
{code}



# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;  or show tables;{code}
# session_2 will not have any response from thrift server util  session_1  
waiting 100 seconds!

this bug may cause hiveserver come into an available status!  
===
# session_1 run waiting 200s,
 !session_1.jpg! 
# session_2 run at the same time ,but blocked by session_1 , see the 
pic,waiting 197s after session_1 returned then returned
 !session_2.jpg! 

===
if someone want to attack or do sth ,hiveserver will not be down,but not 
available again!


  was:
# cause by an error UDF function!time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF throws Exception{
  public String evaluate(Long sleepSeconds){
 ...
 Thread.sleep(Long.parseLong(sleepSeconds) * 1000);
 return "ok";
 }
}
{code}

===
# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;  or show tables;{code}
# session_2 will not have any response from thrift server util  session_1  
waiting 100 seconds!

this bug may cause hiveserver come into an available status!  
===
# session_1 run waiting 200s,
 !session_1.jpg! 
# session_2 run at the same time ,but blocked by session_1 , see the 
pic,waiting 197s after session_1 returned then returned
 !session_2.jpg! 

===
if someone want to attack or do sth ,hiveserver will not be down,but not 
available again!



> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # cause by an error UDF function!time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF throws Exception{
>   public String evaluate(Long sleepSeconds){
>  ...
>  Thread.sleep(Long.parseLong(sleepSeconds) * 1000);
>  return "ok";
>  }
> }
> {code}
> 
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;  or show tables;{code}
> # session_2 will not have any response from thrift server util  session_1  
> waiting 100 seconds!
> this bug may cause hiveserver come into an available status!  
> ===
> # session_1 run waiting 200s,
>  !session_1.jpg! 
> # session_2 run at the same time ,but blocked by session_1 , see the 
> pic,waiting 197s after session_1 returned then returned
>  !session_2.jpg! 
> ===
> if someone want to attack or do sth ,hiveserver will not be down,but not 
> available again!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# cause by an error UDF function!time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF throws Exception{
  public String evaluate(Long sleepSeconds){
 ...
 Thread.sleep(Long.parseLong(sleepSeconds) * 1000);
 return "ok";
 }
}
{code}



# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;  or show tables;{code}
# session_2 will not have any response from thrift server util  session_1  
waiting 100 seconds!

this bug may cause hiveserver come into an available status!  

# session_1 run waiting 200s,
 !session_1.jpg! 
# session_2 run at the same time ,but blocked by session_1 , see the 
pic,waiting 197s after session_1 returned then returned
 !session_2.jpg! 

if someone want to attack or do sth ,hiveserver will not be down,but not 
available again!


  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!
===
# session_1 ran, waiting 200s,
 !session_1.jpg! 
# session_2 ran at the same time but was blocked by session_1; see the screenshot: it waited 197s and returned only after session_1 returned
 !session_2.jpg! 

===
if someone wanted to attack, HiveServer would not go down, but it would stay unavailable!



> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> 
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!
> 
> # session_1 ran, waiting 200s,
>  !session_1.jpg! 
> # session_2 ran at the same time but was blocked by session_1; see the screenshot: it waited 197s and returned only after session_1 returned
>  !session_2.jpg! 
> 
> if someone wanted to attack, HiveServer would not go down, but it would stay unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

===
# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!
===
# session_1 ran, waiting 200s,
 !session_1.jpg! 
# session_2 ran at the same time but was blocked by session_1; see the screenshot: it waited 197s and returned only after session_1 returned
 !session_2.jpg! 

===
if someone wanted to attack, HiveServer would not go down, but it would stay unavailable!


  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

===
# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!
===
# session_1 ran, waiting 200s,
 !session_1.jpg! 
# session_2 ran at the same time but was blocked by session_1; see the screenshot: it waited 197s and returned only after session_1 returned
 !session_2.jpg! 

===
if someone wanted to attack, HiveServer would not go down, but it would stay unavailable!



> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> ===
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!
> ===
> # session_1 ran, waiting 200s,
>  !session_1.jpg! 
> # session_2 ran at the same time but was blocked by session_1; see the screenshot: it waited 197s and returned only after session_1 returned
>  !session_2.jpg! 
> ===
> if someone wanted to attack, HiveServer would not go down, but it would stay unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

===
# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!
===
# session_1 ran, waiting 200s,
 !session_1.jpg! 
# session_2 ran at the same time but was blocked by session_1; see the screenshot: it waited 197s and returned only after session_1 returned
 !session_2.jpg! 

===
if someone wanted to attack, HiveServer would not go down, but it would stay unavailable!


  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!
 !session_1.jpg! 

 !session_2.jpg! 


> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> ===
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!
> ===
> # session_1 ran, waiting 200s,
>  !session_1.jpg! 
> # session_2 ran at the same time but was blocked by session_1; see the screenshot: it waited 197s and returned only after session_1 returned
>  !session_2.jpg! 
> ===
> if someone wanted to attack, HiveServer would not go down, but it would stay unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Attachment: session_1.jpg
session_2.jpg

> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!
 !session_1.jpg! 

 !session_2.jpg! 

  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!


> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!
>  !session_1.jpg! 
>  !session_2.jpg! 





[jira] [Assigned] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe reassigned HIVE-21190:
--

Assignee: Josh Elser  (was: kongxianghe)

> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: Josh Elser
>Priority: Critical
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!

  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!


> hive thrift server may be blocked by session level waiting!
> ---
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: kongxianghe
>Priority: Critical
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!

  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
## select time_waiting(100);
# in session_2:
## select 1; or show tables;
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!


> hive thrift server may be blocked by session level waiting!
> ---
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: kongxianghe
>Priority: Critical
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!

  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
{code}select time_waiting(100);{code}
# in session_2:
{code}select 1;{code} or {code}show tables;{code}
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!


> hive thrift server may be blocked by session level waiting!
> ---
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: kongxianghe
>Priority: Critical
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;{code} or {code}show tables;{code}
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!





[jira] [Updated] (HIVE-21190) hive thrift server may be blocked by session level waiting!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe updated HIVE-21190:
---
Description: 
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}

# in session_1:
## select time_waiting(100);
# in session_2:
## select 1; or show tables;
# session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!

this bug may cause HiveServer to become unavailable!

  was:
# caused by a faulty UDF: time_waiting(Long sleepSeconds)
{code}
public class UDFTimeWaiting extends UDF {
  public String evaluate(Long sleepSeconds) throws InterruptedException {
    // ...
    Thread.sleep(sleepSeconds * 1000);
    return "ok";
  }
}
{code}


> hive thrift server may be blocked by session level waiting!
> ---
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: kongxianghe
>Priority: Critical
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}
> # in session_1:
> ## select time_waiting(100);
> # in session_2:
> ## select 1; or show tables;
> # session_2 will not get any response from the Thrift server until session_1 finishes waiting 100 seconds!
> this bug may cause HiveServer to become unavailable!





[jira] [Commented] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756917#comment-16756917
 ] 

Hive QA commented on HIVE-17938:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} common: The patch generated 1 new + 427 unchanged - 1 
fixed = 428 total (was 428) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15857/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15857/yetus/diff-checkstyle-common.txt
 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15857/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> Just realized that this is not yet enabled in apache by default. 
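For reference, the flag discussed above is a HiveConf property that can be set in hive-site.xml. A sketch of enabling it (the description text below is a paraphrase, not the official wording; check your version's default before overriding):

```xml
<property>
  <name>hive.driver.parallel.compilation</name>
  <value>true</value>
  <description>Allow queries from different sessions to be compiled
    concurrently instead of serializing compilation behind a global
    lock in HiveServer2.</description>
</property>
```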





[jira] [Assigned] (HIVE-21190) hive thrift server may be blocked by session level waiting!

2019-01-30 Thread kongxianghe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kongxianghe reassigned HIVE-21190:
--


> hive thrift server may be blocked by session level waiting!
> ---
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Assignee: kongxianghe
>Priority: Critical
>
> # caused by a faulty UDF: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(Long sleepSeconds) throws InterruptedException {
>     // ...
>     Thread.sleep(sleepSeconds * 1000);
>     return "ok";
>   }
> }
> {code}





[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756904#comment-16756904
 ] 

Hive QA commented on HIVE-21188:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956977/HIVE-21188.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 15719 tests 
executed
*Failed tests:*
{noformat}
TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
TEST-*.xml file (likely timed out) (batchId=251)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=275)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view_partitioned] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[escape_comments] 
(batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_ppr] (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_1] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppr_pushdown3] 
(batchId=30)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[unicode_comments] 
(batchId=42)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_partitioned]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_partitioned_3]
 (batchId=176)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15856/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15856/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15856/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956977 - PreCommit-HIVE-Build

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.patch
>
>
> When view reference is fully qualified. Following q file can be used to 
> reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}





[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756880#comment-16756880
 ] 

Hive QA commented on HIVE-21188:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
34s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15856/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15856/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.








[jira] [Updated] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-30 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21029:

Status: Patch Available  (was: Open)

Attached 04.patch with a fix for the review comments from Mahesh.

> External table replication for existing deployments running incremental 
> replication.
> 
>
> Key: HIVE-21029
> URL: https://issues.apache.org/jira/browse/HIVE-21029
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.1.1, 3.1.0, 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21029.01.patch, HIVE-21029.02.patch, 
> HIVE-21029.03.patch, HIVE-21029.04.patch
>
>
> Existing deployments using hive replication do not get external tables 
> replicated. For such deployments to enable external table replication, they 
> will have to provide a specific switch to first bootstrap external tables as 
> part of hive incremental replication, following which the incremental 
> replication will take care of further changes in external tables.
> The switch will be provided by an additional hive configuration (for ex: 
> hive.repl.bootstrap.external.tables) and is to be used in the 
> {code} WITH {code} clause of the 
> {code} REPL DUMP {code} command. 
> Additionally, the existing hive config _hive.repl.include.external.tables_ 
> will always have to be set to "true" in the above clause. 
> Proposed usage for enabling external tables replication on existing 
> replication policy.
> 1. Consider an ongoing repl policy  in incremental phase.
> Enable hive.repl.include.external.tables=true and 
> hive.repl.bootstrap.external.tables=true in next incremental REPL DUMP.
> - Dumps all events but skips events related to external tables.
> - Instead, combine bootstrap dump for all external tables under "_bootstrap" 
> directory.
> - Also, includes the data locations file "_external_tables_info".
> - LIMIT or TO clause shouldn’t be there to ensure the latest events are 
> dumped before bootstrap dumping external tables.
> 2. REPL LOAD on this dump applies all the events first, copies external 
> tables data and then bootstrap external tables (metadata).
> - It is possible that the external tables (metadata) are not point-in time 
> consistent with rest of the tables.
> - But, it would be eventually consistent when the next incremental load is 
> applied.
> - This REPL LOAD is fault tolerant and can be retried if failed.
> 3. All future REPL DUMPs on this repl policy should set 
> hive.repl.bootstrap.external.tables=false.
> - If not set to false, then target might end up having inconsistent set of 
> external tables as bootstrap wouldn’t clean-up any dropped external tables.
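The proposed usage above can be sketched as two REPL DUMP invocations (the policy name srcdb is a placeholder, and hive.repl.bootstrap.external.tables is the config name proposed in this ticket, not a finalized one):

{code}
-- Step 1: first incremental dump after enabling external table replication
REPL DUMP srcdb
WITH ('hive.repl.include.external.tables'='true',
      'hive.repl.bootstrap.external.tables'='true');

-- Step 3: all subsequent dumps on the same policy reset the bootstrap switch
REPL DUMP srcdb
WITH ('hive.repl.include.external.tables'='true',
      'hive.repl.bootstrap.external.tables'='false');
{code}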





[jira] [Commented] (HIVE-21182) Skip setting up hive scratch dir during planning

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756863#comment-16756863
 ] 

Hive QA commented on HIVE-21182:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956960/HIVE-21182.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15720 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15855/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15855/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15855/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956960 - PreCommit-HIVE-Build

> Skip setting up hive scratch dir during planning
> 
>
> Key: HIVE-21182
> URL: https://issues.apache.org/jira/browse/HIVE-21182
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21182.1.patch, HIVE-21182.2.patch
>
>
> During the metadata gathering phase, hive creates a staging/scratch dir which is 
> further used by the FS op (the FS op sets up a staging dir within this dir for tasks to 
> write to).
> Since the FS op does mkdirs to set up the staging dir, we can skip creating the scratch dir 
> during the metadata gathering phase. The FS op will take care of setting up all the 
> dirs.





[jira] [Updated] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-30 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21029:

Status: Open  (was: Patch Available)






[jira] [Updated] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-30 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21029:

Attachment: HIVE-21029.04.patch






[jira] [Updated] (HIVE-21186) External tables replication throws NPE if hive.repl.replica.external.table.base.dir is not fully qualified HDFS path.

2019-01-30 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21186:

Fix Version/s: 4.0.0

> External tables replication throws NPE if 
> hive.repl.replica.external.table.base.dir is not fully qualified HDFS path.
> -
>
> Key: HIVE-21186
> URL: https://issues.apache.org/jira/browse/HIVE-21186
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Fix For: 4.0.0
>
> Attachments: HIVE-21186.01.patch
>
>
> REPL DUMP is fine. Load seems to be throwing exception:
> {code}
> 2019-01-29 09:25:12,671 ERROR HiveServer2-Background-Pool: Thread-4864: 
> ql.Driver (SessionState.java:printError(1129)) - FAILED: Execution Error, 
> return code 4 from org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask. 
> java.lang.NullPointerException
> 2019-01-29 09:25:12,671 INFO HiveServer2-Background-Pool: Thread-4864: 
> ql.Driver (Driver.java:execute(1661)) - task failed with
> org.apache.hadoop.hive.ql.parse.SemanticException: 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.tasks(LoadTable.java:154)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask.executeBootStrapLoad(ReplLoadTask.java:141)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask.execute(ReplLoadTask.java:82)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:177)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:93)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1777)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1511)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1308)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1175)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1170)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:255)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:273)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.util.PathUtils.getExternalTmpPath(PathUtils.java:35)
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.loadTableTask(LoadTable.java:245)
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.newTableTasks(LoadTable.java:189)
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.tasks(LoadTable.java:136)
> ... 23 more
> {code}
> REPL Load statement: 
> {code}
> REPL LOAD `testdb1_tgt` FROM 
> 'hdfs://ctr-e139-1542663976389-56533-01-11.hwx.site:8020/apps/hive/repl/c9476207-8179-4db7-b947-ba67c950a340'
>  WITH 
> ('hive.query.id'='testHive1_3dd5e281-89ef-4054-850e-8a34386fc2c8','hive.exec.parallel'='true','hive.repl.replica.external.table.base.dir'='/tmp/someNewloc/','hive.repl.include.external.tables'='true','mapreduce.map.java.opts'='-Xmx640m','hive.distcp.privileged.doAs'='beacon','distcp.options.pugpb'='')
> {code}
> This is an issue with Hive being unable to handle a path without schema/authority 
> as input for "hive.repl.replica.external.table.base.dir".
> Here the input was 'hive.repl.replica.external.table.base.dir'='/tmp/someNewloc/'.
> If we set a fully qualified HDFS path (such as 
> hdfs://:), the issue does not occur. 
> Need to fix it in Hive to accept a path without schema/authority and obtain it 
> from the local cluster.
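Until a fix is available, a workaround consistent with the description above is to pass the base dir fully qualified in the REPL LOAD WITH clause (the dump dir, namenode host, and port below are placeholders, not values from this report):

{code}
REPL LOAD `testdb1_tgt` FROM '<dump-dir>'
WITH ('hive.repl.include.external.tables'='true',
      'hive.repl.replica.external.table.base.dir'='hdfs://<namenode-host>:<port>/tmp/someNewloc/');
{code}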





[jira] [Updated] (HIVE-21186) External tables replication throws NPE if hive.repl.replica.external.table.base.dir is not fully qualified HDFS path.

2019-01-30 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21186:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

01.patch committed to master.
Thanks [~maheshk114] for the review!






[jira] [Commented] (HIVE-21182) Skip setting up hive scratch dir during planning

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756838#comment-16756838
 ] 

Hive QA commented on HIVE-21182:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
37s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15855/dev-support/hive-personality.sh
 |
| git revision | master / dfc4b8e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15855/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.








[jira] [Commented] (HIVE-21186) External tables replication throws NPE if hive.repl.replica.external.table.base.dir is not fully qualified HDFS path.

2019-01-30 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756833#comment-16756833
 ] 

mahesh kumar behera commented on HIVE-21186:


01.patch looks fine to me. 

+1

> External tables replication throws NPE if 
> hive.repl.replica.external.table.base.dir is not fully qualified HDFS path.
> -
>
> Key: HIVE-21186
> URL: https://issues.apache.org/jira/browse/HIVE-21186
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Attachments: HIVE-21186.01.patch
>
>
> REPL DUMP is fine. Load seems to be throwing exception:
> {code}
> 2019-01-29 09:25:12,671 ERROR HiveServer2-Background-Pool: Thread-4864: 
> ql.Driver (SessionState.java:printError(1129)) - FAILED: Execution Error, 
> return code 4 from org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask. 
> java.lang.NullPointerException
> 2019-01-29 09:25:12,671 INFO HiveServer2-Background-Pool: Thread-4864: 
> ql.Driver (Driver.java:execute(1661)) - task failed with
> org.apache.hadoop.hive.ql.parse.SemanticException: 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.tasks(LoadTable.java:154)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask.executeBootStrapLoad(ReplLoadTask.java:141)
> at 
> org.apache.hadoop.hive.ql.exec.repl.ReplLoadTask.execute(ReplLoadTask.java:82)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:177)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:93)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1777)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1511)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1308)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1175)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1170)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:255)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:273)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.util.PathUtils.getExternalTmpPath(PathUtils.java:35)
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.loadTableTask(LoadTable.java:245)
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.newTableTasks(LoadTable.java:189)
> at 
> org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.table.LoadTable.tasks(LoadTable.java:136)
> ... 23 more
> {code}
> REPL Load statement: 
> {code}
> REPL LOAD `testdb1_tgt` FROM 
> 'hdfs://ctr-e139-1542663976389-56533-01-11.hwx.site:8020/apps/hive/repl/c9476207-8179-4db7-b947-ba67c950a340'
>  WITH 
> ('hive.query.id'='testHive1_3dd5e281-89ef-4054-850e-8a34386fc2c8','hive.exec.parallel'='true','hive.repl.replica.external.table.base.dir'='/tmp/someNewloc/','hive.repl.include.external.tables'='true','mapreduce.map.java.opts'='-Xmx640m','hive.distcp.privileged.doAs'='beacon','distcp.options.pugpb'='')
> {code}
> This is an issue with Hive being unable to handle a path without 
> schema/authority as the input for "hive.repl.replica.external.table.base.dir".
> Here the input was 
> 'hive.repl.replica.external.table.base.dir'='/tmp/someNewloc/'.
> Setting a fully qualified HDFS path (such as 
> hdfs://:) avoids the problem. Hive needs to be fixed to accept a path without 
> schema/authority and to obtain them from the local cluster.
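The fix the report asks for can be sketched in plain Java; class and method names below are hypothetical, and Hive itself would use Hadoop's FileSystem.makeQualified rather than raw java.net.URI:

```java
import java.net.URI;

public class PathQualifier {
    // Hypothetical helper: if the configured base dir has no scheme/authority
    // (e.g. "/tmp/someNewloc/"), qualify it against the local cluster's
    // default filesystem URI instead of failing with an NPE.
    static URI qualify(String path, URI defaultFs) {
        URI u = URI.create(path);
        if (u.getScheme() == null) {
            return defaultFs.resolve(u);
        }
        return u;
    }

    public static void main(String[] args) {
        URI fs = URI.create("hdfs://namenode:8020");
        System.out.println(qualify("/tmp/someNewloc/", fs));
    }
}
```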



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21045) Add HMS total api count stats and connection pool stats to metrics

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756829#comment-16756829
 ] 

Hive QA commented on HIVE-21045:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956949/HIVE-21045.2.branch-3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 132 failed/errored test(s), 14471 tests 
executed
*Failed tests:*
{noformat}
TestAddPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestAddPartitionsFromPartSpec - did not produce a TEST-*.xml file (likely timed 
out) (batchId=230)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestAggregateStatsCache - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestAlterPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestAppendPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=274)
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestCatalogNonDefaultClient - did not produce a TEST-*.xml file (likely timed 
out) (batchId=228)
TestCatalogNonDefaultSvr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestCatalogOldClient - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestCatalogs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestCheckConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=238)
TestDatabases - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestDefaultConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDropPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=274)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=231)
TestExchangePartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestFMSketchSerialization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=239)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestForeignKey - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestFunctions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestGetPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestGetPartitionsUsingProjectionAndFilterSpecs - did not produce a TEST-*.xml 
file (likely timed out) (batchId=230)
TestGetTableMeta - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHLLNoBias - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHLLSerialization - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHdfsUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestHiveAlterHandler - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed 
out) (batchId=238)
TestHiveMetaStorePartitionSpecs - did not produce a TEST-*.xml file (likely 
timed out) (batchId=230)
TestHiveMetaStoreSchemaMethods - did not produce a TEST-*.xml file (likely 
timed out) (batchId=235)
TestHiveMetaStoreTimeout - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=233)
TestHiveMetaToolCommandLine - did not produce a TEST-*.xml file (likely timed 
out) (batchId=235)
TestHiveMetastoreCli - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHyperLogLog - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHyperLogLogDense - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHyperLogLogMerge - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHyperLogLogSparse - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestJSONMessageDeserializer - did not produce a TEST-*.xml file (likely timed 
out) (batchId=235)
TestListPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestLockRequestBuilder - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestMarkPartition - d

[jira] [Updated] (HIVE-20255) Review LevelOrderWalker.java

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20255:
---
Status: Patch Available  (was: Open)

{code}
TestReplicationScenariosIncrementalLoadAcidTables
{code}

... unit test keeps failing, but I've seen it failing in other JIRAs too.

> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.18.patch, HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some check style issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> Actually, the underlying implementation of {{getChildren()}} has to do some 
> real work, so do not throw away the work after checking for null.  Simply 
> call once and store the results.
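The suggested refactor — call {{getChildren()}} once and reuse the result — can be sketched as below; the Node interface is a minimal hypothetical stand-in for Hive's org.apache.hadoop.hive.ql.lib.Node, just enough to illustrate the point:

```java
import java.util.Arrays;
import java.util.List;

public class WalkerSketch {
    // Hypothetical stand-in for org.apache.hadoop.hive.ql.lib.Node.
    interface Node {
        List<Node> getChildren();
    }

    // Call getChildren() once and store the result, instead of invoking it
    // twice (once for the null check, once for the iteration).
    static int countChildren(Node node) {
        List<Node> children = node.getChildren();
        int count = 0;
        if (children != null) {
            for (Node child : children) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Node leaf = () -> null;
        Node parent = () -> Arrays.asList(leaf, leaf);
        System.out.println(countChildren(parent));
    }
}
```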



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20255) Review LevelOrderWalker.java

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20255:
---
Attachment: HIVE-20255.18.patch

> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.18.patch, HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some check style issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> Actually, the underlying implementation of {{getChildren()}} has to do some 
> real work, so do not throw away the work after checking for null.  Simply 
> call once and store the results.





[jira] [Updated] (HIVE-20255) Review LevelOrderWalker.java

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20255:
---
Status: Open  (was: Patch Available)

> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some check style issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> Actually, the underlying implementation of {{getChildren()}} has to do some 
> real work, so do not throw away the work after checking for null.  Simply 
> call once and store the results.





[jira] [Updated] (HIVE-21070) HiveSchemaTool doesn't load hivemetastore-site.xml

2019-01-30 Thread peng bo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

peng bo updated HIVE-21070:
---
Attachment: (was: schemaLoadMetaConf.patch)

> HiveSchemaTool doesn't load hivemetastore-site.xml
> --
>
> Key: HIVE-21070
> URL: https://issues.apache.org/jira/browse/HIVE-21070
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.3.3
>Reporter: peng bo
>Assignee: peng bo
>Priority: Major
> Attachments: HIVE-21070.1.patch
>
>
> HiveSchemaTool doesn't load hivemetastore-site.xml in the case of a 
> non-embedded MetaStore.
> javax.jdo.option is a server-side metastore property that is always defined in 
> hivemetastore-site.xml. It seems reasonable for HiveSchemaTool to always read 
> this file.





[jira] [Commented] (HIVE-21070) HiveSchemaTool doesn't load hivemetastore-site.xml

2019-01-30 Thread peng bo (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756813#comment-16756813
 ] 

peng bo commented on HIVE-21070:


[~thejas] Thanks for your quick reply.
I have renamed the file as required.

> HiveSchemaTool doesn't load hivemetastore-site.xml
> --
>
> Key: HIVE-21070
> URL: https://issues.apache.org/jira/browse/HIVE-21070
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.3.3
>Reporter: peng bo
>Assignee: peng bo
>Priority: Major
> Attachments: HIVE-21070.1.patch
>
>
> HiveSchemaTool doesn't load hivemetastore-site.xml in the case of a 
> non-embedded MetaStore.
> javax.jdo.option is a server-side metastore property that is always defined in 
> hivemetastore-site.xml. It seems reasonable for HiveSchemaTool to always read 
> this file.





[jira] [Updated] (HIVE-21070) HiveSchemaTool doesn't load hivemetastore-site.xml

2019-01-30 Thread peng bo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

peng bo updated HIVE-21070:
---
Attachment: HIVE-21070.1.patch

> HiveSchemaTool doesn't load hivemetastore-site.xml
> --
>
> Key: HIVE-21070
> URL: https://issues.apache.org/jira/browse/HIVE-21070
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.3.3
>Reporter: peng bo
>Assignee: peng bo
>Priority: Major
> Attachments: HIVE-21070.1.patch
>
>
> HiveSchemaTool doesn't load hivemetastore-site.xml in the case of a 
> non-embedded MetaStore.
> javax.jdo.option is a server-side metastore property that is always defined in 
> hivemetastore-site.xml. It seems reasonable for HiveSchemaTool to always read 
> this file.





[jira] [Commented] (HIVE-21045) Add HMS total api count stats and connection pool stats to metrics

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756789#comment-16756789
 ] 

Hive QA commented on HIVE-21045:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-15854/patches/PreCommit-HIVE-Build-15854.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15854/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Add HMS total api count stats and connection pool stats to metrics
> --
>
> Key: HIVE-21045
> URL: https://issues.apache.org/jira/browse/HIVE-21045
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21045.1.patch, HIVE-21045.2.branch-3.patch, 
> HIVE-21045.2.patch, HIVE-21045.3.patch, HIVE-21045.4.patch, 
> HIVE-21045.5.patch, HIVE-21045.6.patch, HIVE-21045.7.patch, 
> HIVE-21045.branch-3.patch
>
>
> There are two key metrics which I think we lack and which would be really 
> great to help with scaling visibility in HMS.
> *Total API calls duration stats*
> We already compute and log the duration of API calls in the {{PerfLogger}}. 
> We don't have any gauge or timer on what the average duration of an API call 
> is over some recent bucket of time. This will give us insight into whether 
> there is load on the server that is increasing the average API response time.
>  
> *Connection Pool stats*
> We can use different connection pooling libraries such as bonecp or hikaricp. 
> These pool managers expose statistics such as average time waiting to get a 
> connection, number of connections active, etc. We should expose this as a 
> metric so that we can track if the configured connection pool size is too 
> small and we are saturating!
> These metrics would help catch problems with HMS resource contention before 
> they actually have jobs failing.
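The first metric the description asks for can be illustrated with a stdlib-only sketch; the real implementation would plug a dropwizard-style timer into Hive's Metrics subsystem rather than hand-roll an accumulator like this hypothetical one:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical accumulator tracking average API call duration.
public class ApiCallStats {
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong calls = new AtomicLong();

    // Record one API call's duration in nanoseconds.
    public void record(long nanos) {
        totalNanos.addAndGet(nanos);
        calls.incrementAndGet();
    }

    // Average duration in milliseconds across all recorded calls.
    public double averageMillis() {
        long n = calls.get();
        return n == 0 ? 0.0 : (totalNanos.get() / (double) n) / 1_000_000.0;
    }
}
```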





[jira] [Commented] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-30 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756780#comment-16756780
 ] 

Ashutosh Chauhan commented on HIVE-17938:
-

+1

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> Just realized that this is not yet enabled in apache by default. 





[jira] [Updated] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20484:
---
Status: Patch Available  (was: Open)

[~ngangam] Thank you for the review.  You know me, I love a good nit.  Thanks 
for showing me the way.  I have updated the patch accordingly.

> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch, HIVE-20484.5.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached for this Scan.
>* 
>* This is true by default.  When true, default settings of the table and
>* family are used (this will never override caching blocks if the block
>* cache is disabled for that family or entirely).
>*
>* @param cacheBlocks if false, default settings are overridden and blocks
>* will not be cached
>*/
>   public Scan setCacheBlocks(boolean cacheBlocks) {
> this.cacheBlocks = cacheBlocks;
> return this;
>   }
> {code}
> Hive is doing full scans of the table with MapReduce/Spark and therefore, 
> according to the HBase docs, the default behavior here should be that blocks 
> are not cached.  Hive should set this value to "false" by default unless the 
> table {{SERDEPROPERTIES}} override this.
> {code:sql}
> -- Commands for HBase
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" = "t:,:key",
> "hbase.scan.cacheblock" = "false"
> );
> {code}
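The proposed default can be sketched as below; the helper name is hypothetical, and the actual change would go wherever Hive currently reads the property in the HBase handler:

```java
import java.util.Properties;

public class CacheBlocksDefault {
    static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";

    // Proposed behavior (sketch): when the table property is absent, default
    // to false instead of deferring to HBase's Scan default of true.
    static boolean resolveCacheBlocks(Properties tableProperties) {
        String v = tableProperties.getProperty(HBASE_SCAN_CACHEBLOCKS);
        return v != null && Boolean.parseBoolean(v);
    }
}
```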





[jira] [Updated] (HIVE-20797) Print Number of Locks Acquired

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20797:
---
Attachment: HIVE-20797.3.patch

> Print Number of Locks Acquired
> --
>
> Key: HIVE-20797
> URL: https://issues.apache.org/jira/browse/HIVE-20797
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, Locking
>Affects Versions: 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20797.1.patch, HIVE-20797.2.patch, 
> HIVE-20797.3.patch
>
>
> The number of locks acquired by a query can greatly influence the performance 
> and stability of the system, especially for ZK locks.  Please add INFO level 
> logging with the number of locks each query obtains.
> Log here:
> https://github.com/apache/hive/blob/3963c729fabf90009cb67d277d40fe5913936358/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1670-L1672
> {quote}
> A list of acquired locks will be stored in the 
> org.apache.hadoop.hive.ql.Context object and can be retrieved via 
> org.apache.hadoop.hive.ql.Context#getHiveLocks.
> {quote}
> https://github.com/apache/hive/blob/758ff449099065a84c46d63f9418201c8a6731b1/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java#L115-L127
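The requested log line might look roughly like the sketch below; the field and method names are hypothetical, and in Hive it would sit in Driver.java after lock acquisition, using the locks from Context#getHiveLocks:

```java
import java.util.List;

public class LockCountLogger {
    // Hypothetical formatter for the INFO message the ticket asks for;
    // a null lock list is treated as zero locks.
    static String formatLockMessage(String queryId, List<?> hiveLocks) {
        int n = (hiveLocks == null) ? 0 : hiveLocks.size();
        return "Acquired " + n + " lock(s) for query " + queryId;
    }
}
```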





[jira] [Commented] (HIVE-20843) RELY constraints on primary keys and foreign keys are not recognized

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756786#comment-16756786
 ] 

Hive QA commented on HIVE-20843:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946441/HIVE-20843.1-branch-2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15853/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15853/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15853/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-01-31 02:05:52.936
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15853/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-2 ]]
+ [[ -d apache-github-branch-2-source ]]
+ [[ ! -d apache-github-branch-2-source/.git ]]
+ [[ ! -d apache-github-branch-2-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-01-31 02:05:52.977
+ cd apache-github-branch-2-source
+ git fetch origin
From https://github.com/apache/hive
   55fcff1..0b8cfa7  branch-2   -> origin/branch-2
   00c0ee7..3b1d4fd  branch-2.3 -> origin/branch-2.3
   09b92d3..7065c92  branch-3   -> origin/branch-3
   f4e0529..7c21361  branch-3.1 -> origin/branch-3.1
   a99be34..dfc4b8e  master -> origin/master
   8151911..750daa4  master-tez092 -> origin/master-tez092
 * [new tag] rel/release-3.1.1 -> rel/release-3.1.1
+ git reset --hard HEAD
HEAD is now at 55fcff1 HIVE-20420: Provide a fallback authorizer when no other 
authorizer is in use (Daniel Dai, reviewed by Laszlo Pinter, Thejas Nair)
+ git clean -f -d
+ git checkout branch-2
Already on 'branch-2'
Your branch is behind 'origin/branch-2' by 2 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/branch-2
HEAD is now at 0b8cfa7 HIVE-21040 : msck does unnecessary file listing at last 
level of directory tree (Vihang Karajgaonkar, reviewed by Prasanth Jayachandran)
+ git merge --ff-only origin/branch-2
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-01-31 02:06:04.788
+ rm -rf ../yetus_PreCommit-HIVE-Build-15853
+ mkdir ../yetus_PreCommit-HIVE-Build-15853
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15853
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15853/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-branch-2-source/metastore/target/generated-sources/antlr3/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-branch-2-source/metastore/src/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer : Classpath
>>  /usr/share/maven/boot/plexus-classworlds-2.x.jar
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MDatabase
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MFieldSchema
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MType
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MTable
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MConstraint
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MSerDeInfo
ENHANCED (Persistable) : org.apache.hadoop.hive.metastore.model.MOrder
ENHANCED (Persistable) : 
org.apache.hadoop.hive.metastore.model.MColumnDescriptor
ENHANCED (Persistable) : org.apache.hadoop

[jira] [Commented] (HIVE-21044) Add SLF4J reporter to the metastore metrics system

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756784#comment-16756784
 ] 

Hive QA commented on HIVE-21044:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956945/HIVE-21044.2.branch-3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15852/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15852/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15852/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12956945/HIVE-21044.2.branch-3.patch
 was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956945 - PreCommit-HIVE-Build

> Add SLF4J reporter to the metastore metrics system
> --
>
> Key: HIVE-21044
> URL: https://issues.apache.org/jira/browse/HIVE-21044
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
>  Labels: metrics
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21044.1.patch, HIVE-21044.2.branch-3.patch, 
> HIVE-21044.2.patch, HIVE-21044.3.patch, HIVE-21044.4.patch, 
> HIVE-21044.branch-3.patch
>
>
> Lets add SLF4J reporter as an option in Metrics reporting system. Currently 
> we support JMX, JSON and Console reporting.
> We will add a new option to {{hive.service.metrics.reporter}} called SLF4J. 
> We can use the 
> {{[Slf4jReporter|https://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/Slf4jReporter.html]}}
>  class.
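Once such a reporter exists, enabling it would presumably be a one-line configuration change; the property value below is assumed from the description above:

{code:xml}
<property>
  <name>hive.service.metrics.reporter</name>
  <value>SLF4J</value>
</property>
{code}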





[jira] [Commented] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756782#comment-16756782
 ] 

Hive QA commented on HIVE-21029:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956921/HIVE-21029.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15721 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15851/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15851/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15851/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956921 - PreCommit-HIVE-Build

> External table replication for existing deployments running incremental 
> replication.
> 
>
> Key: HIVE-21029
> URL: https://issues.apache.org/jira/browse/HIVE-21029
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.0.0, 3.1.0, 3.1.1
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21029.01.patch, HIVE-21029.02.patch, 
> HIVE-21029.03.patch
>
>
> Existing deployments using hive replication do not get external tables 
> replicated. For such deployments to enable external table replication they 
> will have to provide a specific switch to first bootstrap external tables as 
> part of hive incremental replication, following which the incremental 
> replication will take care of further changes in external tables.
> The switch will be provided by an additional Hive configuration (for example, 
> hive.repl.bootstrap.external.tables) and is to be used in the 
> {code} WITH {code} clause of the 
> {code} REPL DUMP {code} command. 
> Additionally, the existing Hive config _hive.repl.include.external.tables_ 
> will always have to be set to "true" in the above clause. 
> Proposed usage for enabling external tables replication on existing 
> replication policy.
> 1. Consider an ongoing repl policy in the incremental phase.
> Enable hive.repl.include.external.tables=true and 
> hive.repl.bootstrap.external.tables=true in the next incremental REPL DUMP.
> - Dumps all events but skips events related to external tables.
> - Instead, combines a bootstrap dump of all external tables under the 
> "_bootstrap" directory.
> - Also includes the data locations file "_external_tables_info".
> - The LIMIT or TO clause shouldn't be used, to ensure the latest events are 
> dumped before bootstrap-dumping external tables.
> 2. REPL LOAD on this dump applies all the events first, copies external 
> table data, and then bootstraps the external tables (metadata).
> - It is possible that the external table metadata is not point-in-time 
> consistent with the rest of the tables.
> - But it would be eventually consistent once the next incremental load is 
> applied.
> - This REPL LOAD is fault-tolerant and can be retried if it fails.
> 3. All future REPL DUMPs on this repl policy should set 
> hive.repl.bootstrap.external.tables=false.
> - If not set to false, the target might end up with an inconsistent set of 
> external tables, as bootstrap wouldn't clean up any dropped external tables.
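The proposed usage above can be sketched as hypothetical commands; the database name and exact option spelling are illustrative only:

{code:sql}
-- Step 1: one-time bootstrap of external tables during the incremental cycle
REPL DUMP srcdb WITH (
  'hive.repl.include.external.tables'='true',
  'hive.repl.bootstrap.external.tables'='true'
);

-- Step 3: all subsequent incremental dumps
REPL DUMP srcdb WITH (
  'hive.repl.include.external.tables'='true',
  'hive.repl.bootstrap.external.tables'='false'
);
{code}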





[jira] [Updated] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20484:
---
Attachment: HIVE-20484.5.patch

> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch, HIVE-20484.5.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached for this Scan.
>* 
>* This is true by default.  When true, default settings of the table and
>* family are used (this will never override caching blocks if the block
>* cache is disabled for that family or entirely).
>*
>* @param cacheBlocks if false, default settings are overridden and blocks
>* will not be cached
>*/
>   public Scan setCacheBlocks(boolean cacheBlocks) {
> this.cacheBlocks = cacheBlocks;
> return this;
>   }
> {code}
> Hive is doing full scans of the table with MapReduce/Spark and therefore, 
> according to the HBase docs, the default behavior here should be that blocks 
> are not cached.  Hive should set this value to "false" by default unless the 
> table {{SERDEPROPERTIES}} override this.
> {code:sql}
> -- Commands for HBase
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" = "t:,:key",
> "hbase.scan.cacheblock" = "false"
> );
> {code}
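A minimal, standalone sketch of the proposed defaulting behavior. This is not Hive's actual code: the class and method names are illustrative, and only plain `java.util.Properties` is used in place of the real table/job configuration objects.

```java
import java.util.Properties;

public class CacheBlocksDefault {
    static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";

    // An explicit table property wins; when the property is absent we
    // default to false, rather than inheriting Scan's built-in default of true.
    static boolean effectiveCacheBlocks(Properties tableProperties) {
        String v = tableProperties.getProperty(HBASE_SCAN_CACHEBLOCKS);
        return v != null && Boolean.parseBoolean(v);
    }

    public static void main(String[] args) {
        Properties explicit = new Properties();
        explicit.setProperty(HBASE_SCAN_CACHEBLOCKS, "true");
        System.out.println(effectiveCacheBlocks(new Properties())); // proposed default: false
        System.out.println(effectiveCacheBlocks(explicit));         // user override: true
    }
}
```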





[jira] [Updated] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20849:
---
Status: Open  (was: Patch Available)

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, HIVE-20849.5.patch
>
>
> I was looking at this class because it blasts a lot of information that is 
> useless (to an admin) to the logs.  Especially if the table has a lot of 
> columns, I see big blocks of logging that are meaningless to me.  I request 
> that the logging be toned down to DEBUG level, along with some other improvements to the code.





[jira] [Updated] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20484:
---
Status: Open  (was: Patch Available)

> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached for this Scan.
>* 
>* This is true by default.  When true, default settings of the table and
>* family are used (this will never override caching blocks if the block
>* cache is disabled for that family or entirely).
>*
>* @param cacheBlocks if false, default settings are overridden and blocks
>* will not be cached
>*/
>   public Scan setCacheBlocks(boolean cacheBlocks) {
> this.cacheBlocks = cacheBlocks;
> return this;
>   }
> {code}
> Hive is doing full scans of the table with MapReduce/Spark and therefore, 
> according to the HBase docs, the default behavior here should be that blocks 
> are not cached.  Hive should set this value to "false" by default unless the 
> table {{SERDEPROPERTIES}} override this.
> {code:sql}
> -- Commands for HBase
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" = "t:,:key",
> "hbase.scan.cacheblock" = "false"
> );
> {code}





[jira] [Updated] (HIVE-20797) Print Number of Locks Acquired

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20797:
---
Status: Open  (was: Patch Available)

> Print Number of Locks Acquired
> --
>
> Key: HIVE-20797
> URL: https://issues.apache.org/jira/browse/HIVE-20797
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, Locking
>Affects Versions: 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20797.1.patch, HIVE-20797.2.patch, 
> HIVE-20797.3.patch
>
>
> The number of locks acquired by a query can greatly influence the performance 
> and stability of the system, especially for ZooKeeper locks.  Please add 
> INFO-level logging with the number of locks each query obtains.
> Log here:
> https://github.com/apache/hive/blob/3963c729fabf90009cb67d277d40fe5913936358/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1670-L1672
> {quote}
> A list of acquired locks will be stored in the 
> org.apache.hadoop.hive.ql.Context object and can be retrieved via 
> org.apache.hadoop.hive.ql.Context#getHiveLocks.
> {quote}
> https://github.com/apache/hive/blob/758ff449099065a84c46d63f9418201c8a6731b1/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java#L115-L127
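A self-contained sketch of the requested logging. Hive itself uses SLF4J and `Context#getHiveLocks`; the class name, method name, and use of `java.util.logging` here are hypothetical stand-ins so the example runs on its own.

```java
import java.util.Arrays;
import java.util.List;
import java.util.logging.Logger;

public class LockCountLogger {
    private static final Logger LOG =
        Logger.getLogger(LockCountLogger.class.getName());

    // Log at INFO how many locks a query acquired; returns the count.
    static int logAcquiredLocks(List<String> hiveLocks, String queryId) {
        int count = hiveLocks == null ? 0 : hiveLocks.size();
        LOG.info("Acquired " + count + " lock(s) for query " + queryId);
        return count;
    }

    public static void main(String[] args) {
        // Lock names below are made up for illustration.
        logAcquiredLocks(Arrays.asList("default.tbl1:SHARED", "default:SHARED"), "query-1");
    }
}
```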





[jira] [Updated] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20849:
---
Attachment: HIVE-20849.5.patch

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, HIVE-20849.5.patch
>
>
> I was looking at this class because it blasts a lot of information that is 
> useless (to an admin) to the logs.  Especially if the table has a lot of 
> columns, I see big blocks of logging that are meaningless to me.  I request 
> that the logging be toned down to DEBUG level, along with some other improvements to the code.





[jira] [Updated] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20849:
---
Status: Patch Available  (was: Open)

Fixed checkstyle errors.

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, HIVE-20849.5.patch
>
>
> I was looking at this class because it blasts a lot of information that is 
> useless (to an admin) to the logs.  Especially if the table has a lot of 
> columns, I see big blocks of logging that are meaningless to me.  I request 
> that the logging be toned down to DEBUG level, along with some other improvements to the code.





[jira] [Updated] (HIVE-20797) Print Number of Locks Acquired

2019-01-30 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20797:
---
Status: Patch Available  (was: Open)

> Print Number of Locks Acquired
> --
>
> Key: HIVE-20797
> URL: https://issues.apache.org/jira/browse/HIVE-20797
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, Locking
>Affects Versions: 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20797.1.patch, HIVE-20797.2.patch, 
> HIVE-20797.3.patch
>
>
> The number of locks acquired by a query can greatly influence the performance 
> and stability of the system, especially for ZooKeeper locks.  Please add 
> INFO-level logging with the number of locks each query obtains.
> Log here:
> https://github.com/apache/hive/blob/3963c729fabf90009cb67d277d40fe5913936358/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1670-L1672
> {quote}
> A list of acquired locks will be stored in the 
> org.apache.hadoop.hive.ql.Context object and can be retrieved via 
> org.apache.hadoop.hive.ql.Context#getHiveLocks.
> {quote}
> https://github.com/apache/hive/blob/758ff449099065a84c46d63f9418201c8a6731b1/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java#L115-L127





[jira] [Commented] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756767#comment-16756767
 ] 

Hive QA commented on HIVE-21029:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
28s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 1 new + 266 unchanged - 2 
fixed = 267 total (was 268) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
40s{color} | {color:red} ql generated 1 new + 2303 unchanged - 1 fixed = 2304 
total (was 2304) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Write to static field 
org.apache.hadoop.hive.ql.exec.repl.incremental.IncrementalLoadTasksBuilder.numIteration
 from instance method 
org.apache.hadoop.hive.ql.exec.repl.incremental.IncrementalLoadTasksBuilder.build(DriverContext,
 Hive, Logger, TaskTracker)  At IncrementalLoadTasksBuilder.java:from instance 
method 
org.apache.hadoop.hive.ql.exec.repl.incremental.IncrementalLoadTasksBuilder.build(DriverContext,
 Hive, Logger, TaskTracker)  At IncrementalLoadTasksBuilder.java:[line 100] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15851/dev-support/hive-personality.sh
 |
| git revision | master / dfc4b8e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15851/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15851/yetus/new-findbugs-ql.html
 |
| modules | C: common ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15851/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> External table replication for existing deployments running incremental 
> replication.
> 
>
> Key: HIV

[jira] [Updated] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-30 Thread Thejas M Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-17938:
-
Status: Patch Available  (was: Open)

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> I just realized that this is not yet enabled in Apache by default. 
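For reference, the setting under discussion can be flipped per session (or in hive-site.xml). A minimal sketch; `hive.driver.parallel.compilation.global.limit` is included as an assumed companion knob for bounding concurrent compilations, not something this ticket changes:

```sql
-- Session-level override in Beeline/HS2:
SET hive.driver.parallel.compilation=true;

-- Assumed companion setting to cap concurrent compilations:
SET hive.driver.parallel.compilation.global.limit=8;
```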





[jira] [Updated] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-30 Thread Thejas M Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-17938:
-
Attachment: HIVE-17938.3.patch

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> I just realized that this is not yet enabled in Apache by default. 





[jira] [Updated] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-30 Thread Thejas M Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-17938:
-
Attachment: HIVE-17938.3.patch

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> I just realized that this is not yet enabled in Apache by default. 





[jira] [Updated] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-30 Thread Thejas M Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-17938:
-
Attachment: (was: HIVE-17938.3.patch)

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> I just realized that this is not yet enabled in Apache by default. 





[jira] [Commented] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756743#comment-16756743
 ] 

Hive QA commented on HIVE-21143:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956919/HIVE-21143.09.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15718 tests 
executed
*Failed tests:*
{noformat}
TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
TEST-*.xml file (likely timed out) (batchId=251)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_interval_2]
 (batchId=177)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15850/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15850/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15850/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956919 - PreCommit-HIVE-Build

> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.
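To illustrate the rewrite in question (table and column names are made up):

```sql
-- Closed form, opaque to Calcite's simplifier:
SELECT * FROM t WHERE c BETWEEN 1 AND 10;

-- Open form, which the simplifier can reason about and fold
-- together with other range predicates:
SELECT * FROM t WHERE c >= 1 AND c <= 10;
```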





[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-30 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756733#comment-16756733
 ] 

Ashutosh Chauhan commented on HIVE-21188:
-

+1 pending tests

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.patch
>
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}





[jira] [Updated] (HIVE-21189) hive.merge.nway.joins should default to false

2019-01-30 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-21189:

Attachment: HIVE-21189.patch

> hive.merge.nway.joins should default to false
> -
>
> Key: HIVE-21189
> URL: https://issues.apache.org/jira/browse/HIVE-21189
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-21189.patch
>
>






[jira] [Updated] (HIVE-21189) hive.merge.nway.joins should default to false

2019-01-30 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-21189:

Status: Patch Available  (was: Open)

> hive.merge.nway.joins should default to false
> -
>
> Key: HIVE-21189
> URL: https://issues.apache.org/jira/browse/HIVE-21189
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-21189.patch
>
>






[jira] [Assigned] (HIVE-21189) hive.merge.nway.joins should default to false

2019-01-30 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-21189:
---


> hive.merge.nway.joins should default to false
> -
>
> Key: HIVE-21189
> URL: https://issues.apache.org/jira/browse/HIVE-21189
> Project: Hive
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
>






[jira] [Assigned] (HIVE-21188) SemanticAnalyzerException for query on view with masked table

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-21188:
--


> SemanticAnalyzerException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}





[jira] [Updated] (HIVE-21188) SemanticAnalyzerException for query on view with masked table

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21188:
---
Status: Patch Available  (was: In Progress)

> SemanticAnalyzerException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.patch
>
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}





[jira] [Updated] (HIVE-21188) SemanticException for query on view with masked table

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21188:
---
Summary: SemanticException for query on view with masked table  (was: 
SemanticAnalyzerException for query on view with masked table)

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.patch
>
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}





[jira] [Work started] (HIVE-21188) SemanticAnalyzerException for query on view with masked table

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21188 started by Jesus Camacho Rodriguez.
--
> SemanticAnalyzerException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.patch
>
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}





[jira] [Updated] (HIVE-21188) SemanticAnalyzerException for query on view with masked table

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21188:
---
Attachment: HIVE-21188.patch

> SemanticAnalyzerException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.patch
>
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}





[jira] [Commented] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756701#comment-16756701
 ] 

Hive QA commented on HIVE-21143:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
37s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 11 new + 116 unchanged - 3 
fixed = 127 total (was 119) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 14 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15850/dev-support/hive-personality.sh
 |
| git revision | master / dfc4b8e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15850/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15850/yetus/whitespace-eol.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15850/yetus/whitespace-tabs.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15850/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch
>
>
> During query compilation it is better to have BETWEEN statements in open 
> form, as Calcite currently does not consider them during simplification.





[jira] [Commented] (HIVE-21044) Add SLF4J reporter to the metastore metrics system

2019-01-30 Thread Karthik Manamcheri (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756695#comment-16756695
 ] 

Karthik Manamcheri commented on HIVE-21044:
---

[~pvary] Looks like branch-3 tests are currently not able to run because of 
HIVE-21180. This change touches code in standalone-metastore and I was able to 
successfully run all the unit tests under standalone-metastore on my 
development machine. Can we merge to branch-3? Thanks.

> Add SLF4J reporter to the metastore metrics system
> --
>
> Key: HIVE-21044
> URL: https://issues.apache.org/jira/browse/HIVE-21044
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
>  Labels: metrics
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21044.1.patch, HIVE-21044.2.branch-3.patch, 
> HIVE-21044.2.patch, HIVE-21044.3.patch, HIVE-21044.4.patch, 
> HIVE-21044.branch-3.patch
>
>
> Let's add an SLF4J reporter as an option in the metrics reporting system. 
> Currently we support JMX, JSON, and console reporting.
> We will add a new option to {{hive.service.metrics.reporter}} called SLF4J. 
> We can use the 
> {{[Slf4jReporter|https://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/Slf4jReporter.html]}}
>  class.
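As a sketch of how the new option might be enabled (hedged: the value name "SLF4J" and the ability to combine it with existing reporters are assumptions based on how the current reporters are configured, not the actual patch), in hive-site.xml:

{code:xml}
<!-- hypothetical: enables the proposed SLF4J reporter alongside JMX -->
<property>
  <name>hive.service.metrics.reporter</name>
  <value>SLF4J,JMX</value>
</property>
{code}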





[jira] [Commented] (HIVE-21045) Add HMS total api count stats and connection pool stats to metrics

2019-01-30 Thread Karthik Manamcheri (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756684#comment-16756684
 ] 

Karthik Manamcheri commented on HIVE-21045:
---

[~ngangam] [~ychena] Looks like branch-3 tests are currently not able to run 
because of HIVE-21180. This change touches code in standalone-metastore and I 
was able to successfully run all the unit tests under standalone-metastore on 
my development machine. Can we merge to branch-3? Thanks.

> Add HMS total api count stats and connection pool stats to metrics
> --
>
> Key: HIVE-21045
> URL: https://issues.apache.org/jira/browse/HIVE-21045
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21045.1.patch, HIVE-21045.2.branch-3.patch, 
> HIVE-21045.2.patch, HIVE-21045.3.patch, HIVE-21045.4.patch, 
> HIVE-21045.5.patch, HIVE-21045.6.patch, HIVE-21045.7.patch, 
> HIVE-21045.branch-3.patch
>
>
> There are two key metrics which I think we lack and which would be really 
> great to help with scaling visibility in HMS.
> *Total API calls duration stats*
> We already compute and log the duration of API calls in the {{PerfLogger}}. 
> We don't have any gauge or timer on what the average duration of an API call 
> is over some recent bucket of time. This will give us insight into whether 
> there is load on the server that is increasing the average API response time.
>  
> *Connection Pool stats*
> We can use different connection pooling libraries such as BoneCP or HikariCP. 
> These pool managers expose statistics such as the average time waiting to get 
> a connection, the number of active connections, etc. We should expose these 
> as metrics so that we can track whether the configured connection pool size 
> is too small and we are saturating the pool!
> These metrics would help catch problems with HMS resource contention before 
> they actually have jobs failing.
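To illustrate the kind of statistic the first metric would capture, here is a hedged, stdlib-only sketch of a rolling average over recent API call durations; the actual patch would presumably use a Dropwizard Timer rather than this hypothetical helper class.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical rolling average over the last N API call durations. */
public class RollingAverage {
    private final int window;
    private final Deque<Long> samples = new ArrayDeque<>();
    private long sum;

    public RollingAverage(int window) {
        this.window = window;
    }

    /** Record one API call duration in milliseconds. */
    public synchronized void record(long durationMs) {
        samples.addLast(durationMs);
        sum += durationMs;
        if (samples.size() > window) {
            sum -= samples.removeFirst(); // drop the oldest sample
        }
    }

    /** Average duration over the retained window; 0 if no samples yet. */
    public synchronized double averageMs() {
        return samples.isEmpty() ? 0.0 : (double) sum / samples.size();
    }

    public static void main(String[] args) {
        RollingAverage avg = new RollingAverage(2);
        avg.record(10);
        avg.record(20);
        avg.record(30); // window of 2 now retains {20, 30}
        System.out.println(avg.averageMs()); // 25.0
    }
}
```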





[jira] [Commented] (HIVE-21044) Add SLF4J reporter to the metastore metrics system

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756673#comment-16756673
 ] 

Hive QA commented on HIVE-21044:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956945/HIVE-21044.2.branch-3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 136 failed/errored test(s), 14400 tests 
executed
*Failed tests:*
{noformat}
TestAddPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestAddPartitionsFromPartSpec - did not produce a TEST-*.xml file (likely timed 
out) (batchId=230)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestAggregateStatsCache - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestAlterPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestAppendPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=274)
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestCatalogNonDefaultClient - did not produce a TEST-*.xml file (likely timed 
out) (batchId=228)
TestCatalogNonDefaultSvr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestCatalogOldClient - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestCatalogs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestCheckConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=238)
TestDatabases - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestDefaultConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDropPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=274)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=231)
TestExchangePartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestFMSketchSerialization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=239)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestForeignKey - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestFunctions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestGetPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestGetPartitionsUsingProjectionAndFilterSpecs - did not produce a TEST-*.xml 
file (likely timed out) (batchId=230)
TestGetTableMeta - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHLLNoBias - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHLLSerialization - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHdfsUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestHiveAlterHandler - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed 
out) (batchId=238)
TestHiveMetaStorePartitionSpecs - did not produce a TEST-*.xml file (likely 
timed out) (batchId=230)
TestHiveMetaStoreSchemaMethods - did not produce a TEST-*.xml file (likely 
timed out) (batchId=235)
TestHiveMetaStoreTimeout - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=233)
TestHiveMetaToolCommandLine - did not produce a TEST-*.xml file (likely timed 
out) (batchId=235)
TestHiveMetastoreCli - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHyperLogLog - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHyperLogLogDense - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHyperLogLogMerge - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestHyperLogLogSparse - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestJSONMessageDeserializer - did not produce a TEST-*.xml file (likely timed 
out) (batchId=235)
TestListPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestLockRequestBuilder - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestMarkPartition - d

[jira] [Commented] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-30 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756668#comment-16756668
 ] 

Naveen Gangam commented on HIVE-20484:
--

[~belugabehr] The fix looks good to me. Just one nit: we can use getBoolean 
directly instead of retrieving the value as a string and parsing it as a 
boolean.
{code}
jobConf.getBoolean(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, false);
{code}

Otherwise +1 for me. 

> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached for this Scan.
>* 
>* This is true by default.  When true, default settings of the table and
>* family are used (this will never override caching blocks if the block
>* cache is disabled for that family or entirely).
>*
>* @param cacheBlocks if false, default settings are overridden and blocks
>* will not be cached
>*/
>   public Scan setCacheBlocks(boolean cacheBlocks) {
> this.cacheBlocks = cacheBlocks;
> return this;
>   }
> {code}
> Hive is doing full scans of the table with MapReduce/Spark and therefore, 
> according to the HBase docs, the default behavior here should be that blocks 
> are not cached.  Hive should set this value to "false" by default unless the 
> table {{SERDEPROPERTIES}} override this.
> {code:sql}
> -- Commands for HBase
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map, row_key string) 
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" = "t:,:key",
> "hbase.scan.cacheblock" = "false"
> );
> {code}
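A minimal sketch of the proposed default behavior (hedged: the constant mirrors HBaseSerDe.HBASE_SCAN_CACHEBLOCKS from the snippets above, but the class and helper here are hypothetical illustrations, not the actual patch):

```java
import java.util.Properties;

public class CacheBlocksDefault {
    // Mirrors HBaseSerDe.HBASE_SCAN_CACHEBLOCKS from the snippet above.
    static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";

    /** Default to false (per the HBase docs) unless SERDEPROPERTIES override it. */
    static boolean resolveCacheBlocks(Properties tableProperties) {
        return Boolean.parseBoolean(
            tableProperties.getProperty(HBASE_SCAN_CACHEBLOCKS, "false"));
    }

    public static void main(String[] args) {
        Properties unset = new Properties();
        Properties overridden = new Properties();
        overridden.setProperty(HBASE_SCAN_CACHEBLOCKS, "true");
        System.out.println(resolveCacheBlocks(unset));      // false
        System.out.println(resolveCacheBlocks(overridden)); // true
    }
}
```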





[jira] [Commented] (HIVE-21044) Add SLF4J reporter to the metastore metrics system

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756612#comment-16756612
 ] 

Hive QA commented on HIVE-21044:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 14s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-15849/patches/PreCommit-HIVE-Build-15849.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15849/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add SLF4J reporter to the metastore metrics system
> --
>
> Key: HIVE-21044
> URL: https://issues.apache.org/jira/browse/HIVE-21044
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
>  Labels: metrics
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21044.1.patch, HIVE-21044.2.branch-3.patch, 
> HIVE-21044.2.patch, HIVE-21044.3.patch, HIVE-21044.4.patch, 
> HIVE-21044.branch-3.patch
>
>
> Let's add an SLF4J reporter as an option in the metrics reporting system. 
> Currently we support JMX, JSON, and console reporting.
> We will add a new option to {{hive.service.metrics.reporter}} called SLF4J. 
> We can use the 
> {{[Slf4jReporter|https://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/Slf4jReporter.html]}}
>  class.





[jira] [Commented] (HIVE-20797) Print Number of Locks Acquired

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756603#comment-16756603
 ] 

Hive QA commented on HIVE-20797:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956907/HIVE-20797.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 791 failed/errored test(s), 12107 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_index] 
(batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=267)
org.apache.hadoop.hive.cli.TestBeeLineDriver.org.apache.hadoop.hive.cli.TestBeeLineDriver
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[colstats_all_nulls] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[create_merge_compressed]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[explain_outputs] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[insert_overwrite_local_directory_1]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[mapjoin2] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[select_dummy_source] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_10] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_11] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_12] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_13] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_16] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_1] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_2] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_3] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_7] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[udf_unix_timestamp] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[buckets] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_database]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_like] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[explain] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[having] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_local]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_warehouse]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_local_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore_nonpart]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_local]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse_nonpart]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_local_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table]
 (batchId=278)
org.apache.hadoop.hive.cli.Te

[jira] [Updated] (HIVE-21184) Add Calcite plan to QueryPlan object

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21184:
---
Attachment: HIVE-21184.04.patch

> Add Calcite plan to QueryPlan object
> 
>
> Key: HIVE-21184
> URL: https://issues.apache.org/jira/browse/HIVE-21184
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21184.01.patch, HIVE-21184.03.patch, 
> HIVE-21184.04.patch
>
>
> The Calcite plan is more readable than the full DAG. Explain 
> formatted/extended will print the plan.





[jira] [Commented] (HIVE-20797) Print Number of Locks Acquired

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756577#comment-16756577
 ] 

Hive QA commented on HIVE-20797:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
40s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15848/dev-support/hive-personality.sh
 |
| git revision | master / dfc4b8e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15848/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Print Number of Locks Acquired
> --
>
> Key: HIVE-20797
> URL: https://issues.apache.org/jira/browse/HIVE-20797
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, Locking
>Affects Versions: 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20797.1.patch, HIVE-20797.2.patch
>
>
> The number of locks acquired by a query can greatly influence the performance 
> and stability of the system, especially for ZK locks.  Please add INFO level 
> logging with the number of locks each query obtains.
> Log here:
> https://github.com/apache/hive/blob/3963c729fabf90009cb67d277d40fe5913936358/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1670-L1672
> {quote}
> A list of acquired locks will be stored in the 
> org.apache.hadoop.hive.ql.Context object and can be retrieved via 
> org.apache.hadoop.hive.ql.Context#getHiveLocks.
> {quote}
> https://github.com/apache/hive/blob/758ff449099065a84c46d63f9418201c8a6731b1/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java#L115-L127
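A hedged sketch of the requested log line (the formatting helper below is hypothetical; in Hive the list would come from org.apache.hadoop.hive.ql.Context#getHiveLocks and the logger would be the Driver's SLF4J logger):

```java
import java.util.Collections;
import java.util.List;
import java.util.logging.Logger;

public class LockCountLogger {
    private static final Logger LOG =
        Logger.getLogger(LockCountLogger.class.getName());

    /** Build the INFO message; a null list means no locks were acquired. */
    static String formatLockCount(String queryId, List<?> hiveLocks) {
        int n = (hiveLocks == null) ? 0 : hiveLocks.size();
        return "Acquired " + n + " locks for query " + queryId;
    }

    public static void main(String[] args) {
        // Stand-in for Context#getHiveLocks() with three dummy locks.
        List<Object> locks = Collections.nCopies(3, new Object());
        LOG.info(formatLockCount("hive_20190130_0001", locks));
    }
}
```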





[jira] [Updated] (HIVE-21182) Skip setting up hive scratch dir during planning

2019-01-30 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21182:
---
Status: Patch Available  (was: Open)

> Skip setting up hive scratch dir during planning
> 
>
> Key: HIVE-21182
> URL: https://issues.apache.org/jira/browse/HIVE-21182
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21182.1.patch, HIVE-21182.2.patch
>
>
> During the metadata gathering phase, Hive creates a staging/scratch dir which 
> is further used by the FS op (the FS op sets up a staging dir within this dir 
> for tasks to write to).
> Since the FS op does mkdirs to set up the staging dir, we can skip creating 
> the scratch dir during the metadata gathering phase. The FS op will take care 
> of setting up all the dirs.





[jira] [Updated] (HIVE-21182) Skip setting up hive scratch dir during planning

2019-01-30 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21182:
---
Status: Open  (was: Patch Available)

> Skip setting up hive scratch dir during planning
> 
>
> Key: HIVE-21182
> URL: https://issues.apache.org/jira/browse/HIVE-21182
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21182.1.patch, HIVE-21182.2.patch
>
>
> During the metadata gathering phase, Hive creates a staging/scratch dir which 
> is further used by the FS op (the FS op sets up a staging dir within this dir 
> for tasks to write to).
> Since the FS op does mkdirs to set up the staging dir, we can skip creating 
> the scratch dir during the metadata gathering phase. The FS op will take care 
> of setting up all the dirs.





[jira] [Updated] (HIVE-21182) Skip setting up hive scratch dir during planning

2019-01-30 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21182:
---
Attachment: HIVE-21182.2.patch

> Skip setting up hive scratch dir during planning
> 
>
> Key: HIVE-21182
> URL: https://issues.apache.org/jira/browse/HIVE-21182
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21182.1.patch, HIVE-21182.2.patch
>
>
> During the metadata gathering phase, Hive creates a staging/scratch dir that
> is later used by the FS op (the FS op sets up a staging dir within this dir
> for tasks to write to).
> Since the FS op does mkdirs to set up its staging dir, we can skip creating
> the scratch dir during the metadata gathering phase. The FS op will take care
> of setting up all the dirs.





[jira] [Commented] (HIVE-20255) Review LevelOrderWalker.java

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756550#comment-16756550
 ] 

Hive QA commented on HIVE-20255:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956902/HIVE-20255.17.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15718 tests 
executed
*Failed tests:*
{noformat}
TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
TEST-*.xml file (likely timed out) (batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15847/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15847/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15847/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956902 - PreCommit-HIVE-Build

> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some checkstyle issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> The underlying implementation of {{getChildren()}} does real work, so do not
> throw that work away after checking for null. Call it once and store the
> result.
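For illustration, the single-call pattern suggested above can be sketched as follows (a self-contained stand-in, not Hive's actual {{LevelOrderWalker}} code; the {{Node}} type here is invented):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in node type: getChildren() returns null when there are no children,
// mirroring the null check in the snippet above.
class Node {
    private final List<Node> children = new ArrayList<>();
    List<Node> getChildren() { return children.isEmpty() ? null : children; }
    void addChild(Node n) { children.add(n); }
}

public class WalkSketch {
    static int countChildren(List<Node> toWalk, int index) {
        // Call the potentially expensive getChildren() once and reuse it,
        // instead of calling it for the null check and again for iteration.
        List<Node> children = toWalk.get(index).getChildren();
        int visited = 0;
        if (children != null) {
            for (Node child : children) {
                visited++;
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Node root = new Node();
        root.addChild(new Node());
        root.addChild(new Node());
        List<Node> toWalk = new ArrayList<>();
        toWalk.add(root);
        System.out.println(countChildren(toWalk, 0)); // prints 2
    }
}
```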





[jira] [Commented] (HIVE-21070) HiveSchemaTool doesn't load hivemetastore-site.xml

2019-01-30 Thread Thejas M Nair (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756532#comment-16756532
 ] 

Thejas M Nair commented on HIVE-21070:
--

The change looks good to me.
Please rename the patch file as per the recommendation in 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-CreatingaPatch

> HiveSchemaTool doesn't load hivemetastore-site.xml
> --
>
> Key: HIVE-21070
> URL: https://issues.apache.org/jira/browse/HIVE-21070
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.3.3
>Reporter: peng bo
>Assignee: peng bo
>Priority: Major
> Attachments: schemaLoadMetaConf.patch
>
>
> HiveSchemaTool doesn't load hivemetastore-site.xml in the case of a
> non-embedded MetaStore.
> The javax.jdo.option settings are server-side metastore properties that are
> always defined in hivemetastore-site.xml, so it seems reasonable for
> HiveSchemaTool to always read this file.





[jira] [Updated] (HIVE-20977) Lazy evaluate the table object in PreReadTableEvent to improve get_partition performance

2019-01-30 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-20977:
--
Fix Version/s: 4.0.0

> Lazy evaluate the table object in PreReadTableEvent to improve get_partition 
> performance
> 
>
> Key: HIVE-20977
> URL: https://issues.apache.org/jira/browse/HIVE-20977
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-20977.1.patch, HIVE-20977.2.patch, 
> HIVE-20977.3.patch, HIVE-20977.4.patch
>
>
> The PreReadTableEvent is generated for non-table operations (such as
> get_partitions), but only if there is an event listener attached. However,
> even that is unnecessary if the event listener is not interested in the
> read-table event.
> For example, the TransactionalValidationListener's onEvent looks like this
> {code:java}
> @Override
> public void onEvent(PreEventContext context) throws MetaException, 
> NoSuchObjectException,
> InvalidOperationException {
>   switch (context.getEventType()) {
> case CREATE_TABLE:
>   handle((PreCreateTableEvent) context);
>   break;
> case ALTER_TABLE:
>   handle((PreAlterTableEvent) context);
>   break;
> default:
>   //no validation required..
>   }
> }{code}
>  
> Note that for read-table events it is a no-op. The problem is that get_table
> is evaluated when creating the PreReadTableEvent, only to be ignored!
> In the code below, {{getMS().getTable(..)}} is evaluated regardless of
> whether the listener uses it.
> {code:java}
> private void fireReadTablePreEvent(String catName, String dbName, String 
> tblName)
> throws MetaException, NoSuchObjectException {
>   if(preListeners.size() > 0) {
> // do this only if there is a pre event listener registered (avoid 
> unnecessary
> // metastore api call)
> Table t = getMS().getTable(catName, dbName, tblName);
> if (t == null) {
>   throw new NoSuchObjectException(TableName.getQualified(catName, dbName, 
> tblName)
>   + " table not found");
> }
> firePreEvent(new PreReadTableEvent(t, this));
>   }
> }
> {code}
> This can be improved by using a {{Supplier}} and lazily evaluating the table
> when needed (computed the first time it is called and memoized after that).
> *Motivation*
> Whenever a partition call occurs (get_partition, etc.), we fire the
> PreReadTableEvent. This hurts performance since it fetches the table even
> when it is not used. This change will improve the performance of the
> get_partition calls.
>  
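The memoization described above can be sketched with a plain {{java.util.function.Supplier}} wrapper (illustrative code, not the actual Hive patch; class and method names are invented):

```java
import java.util.function.Supplier;

// Hypothetical sketch: wrap an expensive lookup (like getMS().getTable(...))
// in a memoizing Supplier so it runs at most once, and only if some listener
// actually asks for the value.
public class LazyTable {
    static <T> Supplier<T> memoize(Supplier<T> delegate) {
        return new Supplier<T>() {
            private T value;          // cached result of the first call
            private boolean computed; // set once the delegate has run
            @Override public synchronized T get() {
                if (!computed) {
                    value = delegate.get();
                    computed = true;
                }
                return value;
            }
        };
    }

    public static void main(String[] args) {
        int[] calls = {0};
        Supplier<String> table = memoize(() -> { calls[0]++; return "tbl"; });
        // A listener that ignores the event never triggers the lookup at all;
        // one that calls get() repeatedly still pays only once.
        table.get();
        table.get();
        System.out.println(calls[0]); // prints 1: fetched exactly once
    }
}
```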





[jira] [Commented] (HIVE-20255) Review LevelOrderWalker.java

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756526#comment-16756526
 ] 

Hive QA commented on HIVE-20255:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
38s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 2 
fixed = 1 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15847/dev-support/hive-personality.sh
 |
| git revision | master / dfc4b8e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15847/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some checkstyle issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> The underlying implementation of {{getChildren()}} does real work, so do not
> throw that work away after checking for null. Call it once and store the
> result.





[jira] [Commented] (HIVE-20843) RELY constraints on primary keys and foreign keys are not recognized

2019-01-30 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756521#comment-16756521
 ] 

Karen Coppage commented on HIVE-20843:
--

Hi [~anuragmantri], have you considered adding a test to this patch? If not, 
do you know how the bug was discovered?
Thanks, Karen

> RELY constraints on primary keys and foreign keys are not recognized
> 
>
> Key: HIVE-20843
> URL: https://issues.apache.org/jira/browse/HIVE-20843
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 3.0.0
>Reporter: Anurag Mantripragada
>Assignee: Anurag Mantripragada
>Priority: Major
> Attachments: HIVE-20843.1-branch-2.patch, HIVE-20843.1.patch
>
>
> Hive doesn't recognize RELY constraints after 
> https://issues.apache.org/jira/browse/HIVE-13076. The issue is in
> BaseSemanticAnalyzer.java, where we assign RELY.
> An unrelated patch fixed this issue in later versions.





[jira] [Commented] (HIVE-21177) Optimize AcidUtils.getLogicalLength()

2019-01-30 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756510#comment-16756510
 ] 

Prasanth Jayachandran commented on HIVE-21177:
--

Looks like only the path is used inside ParsedDeltaLight, so this 
fs.getFileStatus() call can be avoided? One less fs operation.

> Optimize AcidUtils.getLogicalLength()
> -
>
> Key: HIVE-21177
> URL: https://issues.apache.org/jira/browse/HIVE-21177
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-21177.01.patch, HIVE-21177.02.patch
>
>
> {{AcidUtils.getLogicalLength()}} tries to look for the side file
> {{OrcAcidUtils.getSideFile()}} on the file system even when the file couldn't
> possibly be there, e.g. when the path is delta_x_x or base_x. It can only be
> there in delta_x_y with x != y.
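The short-circuit suggested above could look roughly like this (a hypothetical sketch; the directory-name patterns are simplified and do not cover Hive's full ACID naming, e.g. statement-id suffixes):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: decide from the directory name alone whether a side
// file could exist, so we can skip the filesystem lookup for base_x and
// delta_x_x directories. Patterns here are simplified assumptions.
public class SideFileCheck {
    private static final Pattern DELTA = Pattern.compile("delta_(\\d+)_(\\d+).*");

    static boolean mayHaveSideFile(String dirName) {
        if (dirName.startsWith("base_")) {
            return false; // base dirs never carry a side file
        }
        Matcher m = DELTA.matcher(dirName);
        if (m.matches()) {
            // delta_x_x (single-transaction delta) cannot have one either;
            // only delta_x_y with x != y can.
            return !m.group(1).equals(m.group(2));
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(mayHaveSideFile("delta_5_5")); // prints false
        System.out.println(mayHaveSideFile("delta_5_9")); // prints true
        System.out.println(mayHaveSideFile("base_10"));   // prints false
    }
}
```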





[jira] [Updated] (HIVE-21045) Add HMS total api count stats and connection pool stats to metrics

2019-01-30 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-21045:
--
Attachment: HIVE-21045.2.branch-3.patch

> Add HMS total api count stats and connection pool stats to metrics
> --
>
> Key: HIVE-21045
> URL: https://issues.apache.org/jira/browse/HIVE-21045
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21045.1.patch, HIVE-21045.2.branch-3.patch, 
> HIVE-21045.2.patch, HIVE-21045.3.patch, HIVE-21045.4.patch, 
> HIVE-21045.5.patch, HIVE-21045.6.patch, HIVE-21045.7.patch, 
> HIVE-21045.branch-3.patch
>
>
> There are two key metrics that I think we lack and that would greatly help
> with scaling visibility in HMS.
> *Total API call duration stats*
> We already compute and log the duration of API calls in the {{PerfLogger}}.
> We don't have any gauge or timer for the average duration of an API call over
> a recent bucket of time. This would give us insight into whether load on the
> server is increasing the average API response time.
>  
> *Connection pool stats*
> We can use different connection pooling libraries such as BoneCP or HikariCP.
> These pool managers expose statistics such as the average time waiting to get
> a connection, the number of active connections, etc. We should expose these
> as metrics so that we can track whether the configured connection pool size
> is too small and we are saturating it.
> These metrics would help catch HMS resource contention problems before jobs
> actually start failing.
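As a rough illustration of the first metric, an average-duration gauge over a sliding window of recent calls can be sketched with the standard library alone (Hive would use the Dropwizard metrics classes instead; all names here are invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of an average API-call-duration gauge over the last
// N recorded calls. A running sum keeps averageMs() O(1).
public class ApiDurationStats {
    private final int window;
    private final Deque<Long> durationsMs = new ArrayDeque<>();
    private long sumMs;

    public ApiDurationStats(int window) { this.window = window; }

    public synchronized void record(long durationMs) {
        durationsMs.addLast(durationMs);
        sumMs += durationMs;
        if (durationsMs.size() > window) {
            sumMs -= durationsMs.removeFirst(); // drop the oldest sample
        }
    }

    public synchronized double averageMs() {
        return durationsMs.isEmpty() ? 0.0 : (double) sumMs / durationsMs.size();
    }

    public static void main(String[] args) {
        ApiDurationStats stats = new ApiDurationStats(3);
        stats.record(10);
        stats.record(20);
        stats.record(30);
        stats.record(40); // evicts 10; window is now {20, 30, 40}
        System.out.println(stats.averageMs()); // prints 30.0
    }
}
```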





[jira] [Commented] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756497#comment-16756497
 ] 

Hive QA commented on HIVE-20849:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956903/HIVE-20849.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15720 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15846/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15846/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15846/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956903 - PreCommit-HIVE-Build

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch
>
>
> I was looking at this class because it blasts a lot of information that is
> useless (to an admin) to the logs. Especially if the table has a lot of
> columns, I see big blocks of logging that are meaningless to me. I request
> that the logging be toned down to debug, along with some other improvements
> to the code.





[jira] [Updated] (HIVE-21044) Add SLF4J reporter to the metastore metrics system

2019-01-30 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-21044:
--
Attachment: HIVE-21044.2.branch-3.patch

> Add SLF4J reporter to the metastore metrics system
> --
>
> Key: HIVE-21044
> URL: https://issues.apache.org/jira/browse/HIVE-21044
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
>  Labels: metrics
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21044.1.patch, HIVE-21044.2.branch-3.patch, 
> HIVE-21044.2.patch, HIVE-21044.3.patch, HIVE-21044.4.patch, 
> HIVE-21044.branch-3.patch
>
>
> Let's add an SLF4J reporter as an option in the metrics reporting system.
> Currently we support JMX, JSON, and console reporting.
> We will add a new option to {{hive.service.metrics.reporter}} called SLF4J. 
> We can use the 
> {{[Slf4jReporter|https://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/Slf4jReporter.html]}}
>  class.





[jira] [Commented] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756438#comment-16756438
 ] 

Hive QA commented on HIVE-20849:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
31s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 3 new + 91 unchanged - 3 fixed 
= 94 total (was 94) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} ql generated 0 new + 2301 unchanged - 3 fixed = 2301 
total (was 2304) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15846/dev-support/hive-personality.sh
 |
| git revision | master / dfc4b8e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15846/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15846/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch
>
>
> I was looking at this class because it blasts a lot of information that is
> useless (to an admin) to the logs. Especially if the table has a lot of
> columns, I see big blocks of logging that are meaningless to me. I request
> that the logging be toned down to debug, along with some other improvements
> to the code.





[jira] [Commented] (HIVE-20295) Remove !isNumber check after failed constant interpretation

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756393#comment-16756393
 ] 

Hive QA commented on HIVE-20295:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956901/HIVE-20295.04.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 15762 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_const_type] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=18)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_0]
 (batchId=182)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_0]
 (batchId=118)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_0] 
(batchId=149)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15845/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15845/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15845/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956901 - PreCommit-HIVE-Build

> Remove !isNumber check after failed constant interpretation
> ---
>
> Key: HIVE-20295
> URL: https://issues.apache.org/jira/browse/HIVE-20295
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-20295.01.patch, HIVE-20295.02.patch, 
> HIVE-20295.03.patch, HIVE-20295.04.patch
>
>
> During constant interpretation, if the number can't be parsed, it might be
> that the comparison is out of range for the type in question, in which case
> the check could be removed.
> https://github.com/apache/hive/blob/2cabb8da150b8fb980223fbd6c2c93b842ca3ee5/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java#L1163
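A hypothetical illustration of the out-of-range case, for a TINYINT column with range -128..127 (the folding helper below is invented for this sketch and is not Hive's code):

```java
// Sketch of range-based constant folding for the predicate "col > literal"
// on a TINYINT column (-128..127). When the literal lies outside the type's
// range, the predicate can be decided without evaluating any rows.
public class RangeFold {
    // Returns Boolean.FALSE/TRUE when the predicate is decidable from the
    // type range alone, or null when it must be evaluated per row.
    static Boolean foldGreaterThanForTinyint(long literal) {
        if (literal >= 127) return Boolean.FALSE; // no TINYINT exceeds 127
        if (literal < -128) return Boolean.TRUE;  // every TINYINT exceeds it
        return null; // literal is in range: cannot fold
    }

    public static void main(String[] args) {
        System.out.println(foldGreaterThanForTinyint(300));  // prints false
        System.out.println(foldGreaterThanForTinyint(-500)); // prints true
        System.out.println(foldGreaterThanForTinyint(5));    // prints null
    }
}
```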





[jira] [Commented] (HIVE-21187) OptimizedSql is not shown when the expression contains BETWEENs

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756371#comment-16756371
 ] 

Jesus Camacho Rodriguez commented on HIVE-21187:


[~kgyrtkirk], I will take care of this. I just realized that this is part of 
HIVE-20822, which will go in once the Calcite 1.18 update is committed.

> OptimizedSql is not shown when the expression contains BETWEENs
> ---
>
> Key: HIVE-21187
> URL: https://issues.apache.org/jira/browse/HIVE-21187
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> In the patch for HIVE-21143 we see that a lot of optimized SQL printouts are
> going away because of this.





[jira] [Assigned] (HIVE-21187) OptimizedSql is not shown when the expression contains BETWEENs

2019-01-30 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-21187:
--

Assignee: Jesus Camacho Rodriguez  (was: Zoltan Haindrich)

> OptimizedSql is not shown when the expression contains BETWEENs
> ---
>
> Key: HIVE-21187
> URL: https://issues.apache.org/jira/browse/HIVE-21187
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> In the patch for HIVE-21143 we see that a lot of optimized SQL printouts are
> going away because of this.





[jira] [Updated] (HIVE-21045) Add HMS total api count stats and connection pool stats to metrics

2019-01-30 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-21045:
--
Fix Version/s: 4.0.0

> Add HMS total api count stats and connection pool stats to metrics
> --
>
> Key: HIVE-21045
> URL: https://issues.apache.org/jira/browse/HIVE-21045
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21045.1.patch, HIVE-21045.2.patch, 
> HIVE-21045.3.patch, HIVE-21045.4.patch, HIVE-21045.5.patch, 
> HIVE-21045.6.patch, HIVE-21045.7.patch, HIVE-21045.branch-3.patch
>
>
> There are two key metrics that I think we lack and that would greatly help
> with scaling visibility in HMS.
> *Total API call duration stats*
> We already compute and log the duration of API calls in the {{PerfLogger}}.
> We don't have any gauge or timer for the average duration of an API call over
> a recent bucket of time. This would give us insight into whether load on the
> server is increasing the average API response time.
>  
> *Connection pool stats*
> We can use different connection pooling libraries such as BoneCP or HikariCP.
> These pool managers expose statistics such as the average time waiting to get
> a connection, the number of active connections, etc. We should expose these
> as metrics so that we can track whether the configured connection pool size
> is too small and we are saturating it.
> These metrics would help catch HMS resource contention problems before jobs
> actually start failing.





[jira] [Updated] (HIVE-21044) Add SLF4J reporter to the metastore metrics system

2019-01-30 Thread Karthik Manamcheri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Manamcheri updated HIVE-21044:
--
Fix Version/s: 3.2.0

> Add SLF4J reporter to the metastore metrics system
> --
>
> Key: HIVE-21044
> URL: https://issues.apache.org/jira/browse/HIVE-21044
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
>  Labels: metrics
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21044.1.patch, HIVE-21044.2.patch, 
> HIVE-21044.3.patch, HIVE-21044.4.patch, HIVE-21044.branch-3.patch
>
>
> Let's add an SLF4J reporter as an option in the metrics reporting system.
> Currently we support JMX, JSON, and console reporting.
> We will add a new option to {{hive.service.metrics.reporter}} called SLF4J. 
> We can use the 
> {{[Slf4jReporter|https://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/Slf4jReporter.html]}}
>  class.





[jira] [Updated] (HIVE-21178) COLUMNS_V2[COMMENT] size different between derby db & other dbs.

2019-01-30 Thread Venu Yanamandra (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venu Yanamandra updated HIVE-21178:
---
Description: 
Based on the sql scripts present for derby db, the size of COLUMNS_V2[COMMENT] 
is 4000.

[https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/derby]

 

However, if we see those present in say - mysql, we see them limited at 256.

[https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/mysql]

 

For a requirement to store larger amount of comments, non-derby dbs limit the 
maximum size of the column comments.

 

Kindly review the discrepancy. 

 

 

  was:
Based on the sql scripts present for derby db, the size of COLUMNS_V2[COMMENT] 
is 4000.

[https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/derby]

 

However, if we see those present in say - mysql, we see them limited at 256.

[https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/mysql]

 

For a requirement to store larger amount of comments, non-derby dbs, limit the 
maximum size of the column comments.

 

Kindly review the discrepancy. 

 

 


> COLUMNS_V2[COMMENT] size different between derby db & other dbs.
> 
>
> Key: HIVE-21178
> URL: https://issues.apache.org/jira/browse/HIVE-21178
> Project: Hive
>  Issue Type: Bug
>Reporter: Venu Yanamandra
>Priority: Minor
>
> Based on the SQL scripts for the Derby db, the size of 
> COLUMNS_V2[COMMENT] is 4000.
> [https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/derby]
>  
> However, the equivalent scripts for other databases, such as MySQL, limit it to 256.
> [https://github.com/apache/hive/tree/master/metastore/scripts/upgrade/mysql]
>  
> For use cases that need to store longer comments, the non-Derby databases cap 
> the maximum size of column comments.
>  
> Kindly review the discrepancy. 
>  
>  
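For reference, the discrepancy looks roughly like this in the schema DDL (illustrative and paraphrased from the upgrade scripts, not quoted verbatim):

```sql
-- Derby schema scripts: comments up to 4000 characters
CREATE TABLE COLUMNS_V2 (/* ... */ "COMMENT" VARCHAR(4000) /* ... */);

-- MySQL schema scripts: comments truncated at 256 characters
CREATE TABLE COLUMNS_V2 (/* ... */ `COMMENT` varchar(256) /* ... */);
```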



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20295) Remove !isNumber check after failed constant interpretation

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756352#comment-16756352
 ] 

Hive QA commented on HIVE-20295:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
36s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 9 new + 87 unchanged - 0 fixed 
= 96 total (was 87) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} ql generated 0 new + 2300 unchanged - 4 fixed = 2300 
total (was 2304) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15845/dev-support/hive-personality.sh
 |
| git revision | master / dfc4b8e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15845/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15845/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15845/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Remove !isNumber check after failed constant interpretation
> ---
>
> Key: HIVE-20295
> URL: https://issues.apache.org/jira/browse/HIVE-20295
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-20295.01.patch, HIVE-20295.02.patch, 
> HIVE-20295.03.patch, HIVE-20295.04.patch
>
>
> During constant interpretation, if the number can't be parsed, it may be that 
> the comparison is out of range for the type in question, in which case the 
> check could be removed.
> https://github.com/apache/hive/blob/2cabb8da150b8fb980223fbd6c2c93b842ca3ee5/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java#L1163
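The folding idea can be sketched as follows (a hypothetical helper, not Hive's actual {{TypeCheckProcFactory}} code): when a literal cannot be represented in the column's integer type, an equality predicate against it can be decided statically instead of being kept for runtime evaluation.

```java
import java.util.Optional;

// Hypothetical sketch: for "tinyintCol = literal", a literal outside
// TINYINT's range means the predicate is statically false, so it can be
// folded to a constant rather than evaluated row by row.
public class ConstantFold {
  public static Optional<Boolean> foldTinyIntEquals(long literal) {
    if (literal < Byte.MIN_VALUE || literal > Byte.MAX_VALUE) {
      return Optional.of(Boolean.FALSE); // out of range: never equal
    }
    return Optional.empty(); // in range: must be evaluated at runtime
  }
}
```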



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21177) Optimize AcidUtils.getLogicalLength()

2019-01-30 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756350#comment-16756350
 ] 

Eugene Koifman commented on HIVE-21177:
---

{code}
ParsedDeltaLight pd = ParsedDeltaLight.parse(fs.getFileStatus(baseOrDeltaDir));
{code}

{{fs.getFileStatus(baseOrDeltaDir)}} is counted - it wasn't performed before.

I removed some of the comments since they were clearly out of date (even before 
the current patch).


> Optimize AcidUtils.getLogicalLength()
> -
>
> Key: HIVE-21177
> URL: https://issues.apache.org/jira/browse/HIVE-21177
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-21177.01.patch, HIVE-21177.02.patch
>
>
> {{AcidUtils.getLogicalLength()}} tries to look for the side file 
> {{OrcAcidUtils.getSideFile()}} on the file system even when the file couldn't 
> possibly be there, e.g. when the path is delta_x_x or base_x.  It could only 
> be there in delta_x_y, x != y.
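The optimization can be sketched with a hypothetical helper (the name {{mayHaveSideFile}} and the parsing below are illustrative, not Hive's actual {{AcidUtils}} code): a side file is only possible under a multi-transaction delta (delta_x_y with x != y), so the extra filesystem probe can be skipped for delta_x_x and base_x directories.

```java
// Hypothetical helper: decide from the directory name alone whether an ORC
// side file could possibly exist, avoiding a needless filesystem lookup.
public class SideFileCheck {
  public static boolean mayHaveSideFile(String dirName) {
    if (dirName.startsWith("base_")) {
      return false; // base directories never carry a side file
    }
    if (dirName.startsWith("delta_")) {
      String[] parts = dirName.split("_");
      // delta_x_y (optionally delta_x_y_stmtId): side file possible
      // only when the two transaction ids differ
      return parts.length >= 3 && !parts[1].equals(parts[2]);
    }
    return false;
  }
}
```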



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2019-01-30 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756340#comment-16756340
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956900/HIVE-21001.17.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 15721 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer8] 
(batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[druid_floor_hour] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_numeric] 
(batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_join_preds] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join34] (batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join35] (batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join45] (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join47] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin47] (batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[partition_wise_fileformat2]
 (batchId=94)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[rand_partitionpruner3] 
(batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[select_unquote_or] 
(batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_47] 
(batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_between] (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_between_columns] 
(batchId=75)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_multi]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_interval_2]
 (batchId=177)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_16]
 (batchId=136)
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[1]
 (batchId=209)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15844/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15844/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15844/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956900 - PreCommit-HIVE-Build

> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch
>
>
> CLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

