[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: (was: HIVE-21999-1.patch)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.
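As a rough sketch of what the hidden-list mechanism achieves (an illustration of the idea only, not the actual HiveConf implementation; the class and method names here are invented):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustration of a hidden-list mechanism: configuration keys on the
// hidden list are masked before the config is logged or shown to users.
public class HiddenListDemo {
    // Keys treated as sensitive; the ABFS client secret below is the one
    // this issue adds to HiveConf's hidden list.
    static final Set<String> HIDDEN = new HashSet<>(
        Arrays.asList("fs.azure.account.oauth2.client.secret"));

    // Returns a copy of the configuration with hidden values masked.
    static Map<String, String> stripHidden(Map<String, String> conf) {
        Map<String, String> safe = new HashMap<>(conf);
        for (String key : HIDDEN) {
            if (safe.containsKey(key)) {
                safe.put(key, "********");
            }
        }
        return safe;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.azure.account.oauth2.client.secret", "s3cr3t");
        conf.put("hive.execution.engine", "tez");
        // The secret is masked; ordinary properties pass through untouched.
        System.out.println(stripHidden(conf));
    }
}
```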



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Status: Open  (was: Patch Available)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: HIVE-21999-1.patch
Status: Patch Available  (was: Open)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999-1.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Commented] (HIVE-21944) Remove unused methods, fields and variables from Vectorizer

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16889978#comment-16889978
 ] 

Hive QA commented on HIVE-21944:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975371/HIVE-21944.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16682 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18126/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18126/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18126/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975371 - PreCommit-HIVE-Build

> Remove unused methods, fields and variables from Vectorizer
> ---
>
> Key: HIVE-21944
> URL: https://issues.apache.org/jira/browse/HIVE-21944
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Attachments: HIVE-21944.1.patch, HIVE-21944.1.patch, 
> HIVE-21944.1.patch, HIVE-21944.1.patch
>
>
> It seems there are many unused fields, variables, and methods in the
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer class. Removing them
> would make the code easier to understand.





[jira] [Commented] (HIVE-21944) Remove unused methods, fields and variables from Vectorizer

2019-07-22 Thread Ivan Suller (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16889980#comment-16889980
 ] 

Ivan Suller commented on HIVE-21944:


[~kgyrtkirk] passed this time. Thanks for the update.

> Remove unused methods, fields and variables from Vectorizer
> ---
>
> Key: HIVE-21944
> URL: https://issues.apache.org/jira/browse/HIVE-21944
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Attachments: HIVE-21944.1.patch, HIVE-21944.1.patch, 
> HIVE-21944.1.patch, HIVE-21944.1.patch
>
>
> It seems there are many unused fields, variables, and methods in the
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer class. Removing them
> would make the code easier to understand.





[jira] [Updated] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Naresh P R (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-22009:
--
Attachment: HIVE-22009.2.patch

> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, HIVE-22009.1.patch, 
> HIVE-22009.2.patch, HIVE-22009.patch
>
>
> Steps to reproduce:
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> ++
> | createtab_stmt |
> ++
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
> | STORED AS INPUTFORMAT |
> | 'org.apache.hadoop.mapred.TextInputFormat' |
> | OUTPUTFORMAT |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' |
> | TBLPROPERTIES ( |
> | 'bucketing_version'='2', |
> | 'transient_lastDdlTime'='1563467962') |
> ++{code}
> The table location is not '/tmp/emp_ext_table'; instead, it is set to the
> default warehouse path.
>  
>  





[jira] [Assigned] (HIVE-22030) Bumping jackson version to 2.9.9 and 2.9.9.1 (jackson-databind)

2019-07-22 Thread Dombi Akos (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dombi Akos reassigned HIVE-22030:
-


> Bumping jackson version to 2.9.9 and 2.9.9.1 (jackson-databind)
> ---
>
> Key: HIVE-22030
> URL: https://issues.apache.org/jira/browse/HIVE-22030
> Project: Hive
>  Issue Type: Task
>Reporter: Dombi Akos
>Assignee: Dombi Akos
>Priority: Major
> Fix For: 4.0.0
>
>






[jira] [Updated] (HIVE-22030) Bumping jackson version to 2.9.9 and 2.9.9.1 (jackson-databind)

2019-07-22 Thread Dombi Akos (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dombi Akos updated HIVE-22030:
--
Description: 
Bump the following jackson versions:
 - jackson version to 2.9.9
 - jackson-databind version to 2.9.9.1
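In a Maven project like Hive, such a bump typically amounts to updating dependency version properties in the parent POM. A hedged sketch (the property names are illustrative; the real names in Hive's pom.xml may differ):

```xml
<properties>
  <!-- Illustrative property names; check Hive's actual pom.xml. -->
  <jackson.version>2.9.9</jackson.version>
  <jackson.databind.version>2.9.9.1</jackson.databind.version>
</properties>
```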

> Bumping jackson version to 2.9.9 and 2.9.9.1 (jackson-databind)
> ---
>
> Key: HIVE-22030
> URL: https://issues.apache.org/jira/browse/HIVE-22030
> Project: Hive
>  Issue Type: Task
>Reporter: Dombi Akos
>Assignee: Dombi Akos
>Priority: Major
> Fix For: 4.0.0
>
>
> Bump the following jackson versions:
>  - jackson version to 2.9.9
>  - jackson-databind version to 2.9.9.1





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Affects Version/s: 3.0.0

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999-1.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Naresh P R (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-22009:
--
Attachment: HIVE-22009.1-branch-3.1.patch

> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, 
> HIVE-22009.1-branch-3.1.patch, HIVE-22009.1.patch, HIVE-22009.2.patch, 
> HIVE-22009.patch
>
>
> Steps to reproduce:
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> ++
> | createtab_stmt |
> ++
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
> | STORED AS INPUTFORMAT |
> | 'org.apache.hadoop.mapred.TextInputFormat' |
> | OUTPUTFORMAT |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' |
> | TBLPROPERTIES ( |
> | 'bucketing_version'='2', |
> | 'transient_lastDdlTime'='1563467962') |
> ++{code}
> The table location is not '/tmp/emp_ext_table'; instead, it is set to the
> default warehouse path.
>  
>  





[jira] [Commented] (HIVE-21962) Replacing ArrayList params with List in and around PlanUtils and MapWork

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890052#comment-16890052
 ] 

Hive QA commented on HIVE-21962:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
6s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} ql: The patch generated 0 new + 583 unchanged - 1 
fixed = 583 total (was 584) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18127/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18127/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Replacing ArrayList params with List in and around PlanUtils and MapWork
> 
>
> Key: HIVE-21962
> URL: https://issues.apache.org/jira/browse/HIVE-21962
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Attachments: HIVE-21962.1.patch, HIVE-21962.1.patch, 
> HIVE-21962.2.patch, HIVE-21962.2.patch
>
>
> Using the implementing class is usually a bad practice; OO design suggests
> depending on the least restrictive interface instead. ArrayList is used as a
> parameter in a great many methods - this patch covers just a tiny part of that work.
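The kind of signature change involved can be sketched as follows (the class and method names are illustrative, not taken from the actual patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class SignatureDemo {
    // Before: the signature forces every caller to materialize an ArrayList,
    // even when the data already lives in another List implementation.
    static int countColumnsBefore(ArrayList<String> cols) {
        return cols.size();
    }

    // After: any List works, so callers keep their own choice of
    // implementation (ArrayList, LinkedList, unmodifiable lists, ...).
    static int countColumnsAfter(List<String> cols) {
        return cols.size();
    }

    public static void main(String[] args) {
        System.out.println(countColumnsAfter(new ArrayList<>(Arrays.asList("id", "name"))));
        System.out.println(countColumnsAfter(new LinkedList<>(Arrays.asList("salary"))));
    }
}
```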





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Status: In Progress  (was: Patch Available)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: (was: HIVE-21999-1.patch)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: HIVE-21777.patch
Status: Patch Available  (was: In Progress)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21777.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: (was: HIVE-21777.patch)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: HIVE-21999.patch
Status: Patch Available  (was: Open)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Status: Open  (was: Patch Available)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Commented] (HIVE-21962) Replacing ArrayList params with List in and around PlanUtils and MapWork

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890071#comment-16890071
 ] 

Hive QA commented on HIVE-21962:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975370/HIVE-21962.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16682 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18127/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18127/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18127/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975370 - PreCommit-HIVE-Build

> Replacing ArrayList params with List in and around PlanUtils and MapWork
> 
>
> Key: HIVE-21962
> URL: https://issues.apache.org/jira/browse/HIVE-21962
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Minor
> Attachments: HIVE-21962.1.patch, HIVE-21962.1.patch, 
> HIVE-21962.2.patch, HIVE-21962.2.patch
>
>
> Using the implementing class is usually a bad practice; OO design suggests
> depending on the least restrictive interface instead. ArrayList is used as a
> parameter in a great many methods - this patch covers just a tiny part of that work.





[jira] [Commented] (HIVE-21960) HMS tasks on replica

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890074#comment-16890074
 ] 

Hive QA commented on HIVE-21960:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975375/Replication%20and%20House%20keeping%20tasks.pdf

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18128/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18128/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18128/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-07-22 10:59:29.555
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-18128/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-07-22 10:59:29.558
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at ac78f79 HIVE-21711: Regression caused by HIVE-21279 for 
blobstorage fs (Vineet Garg, reviewed by Prasanth Jayachandran)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at ac78f79 HIVE-21711: Regression caused by HIVE-21279 for 
blobstorage fs (Vineet Garg, reviewed by Prasanth Jayachandran)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-07-22 10:59:30.759
+ rm -rf ../yetus_PreCommit-HIVE-Build-18128
+ mkdir ../yetus_PreCommit-HIVE-Build-18128
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-18128
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-18128/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
fatal: unrecognized input
fatal: unrecognized input
fatal: unrecognized input
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-18128
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975375 - PreCommit-HIVE-Build

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21960.01.patch, Replication and House keeping 
> tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether:
>  # they are required to be performed on the replicated data, and
>  # performing them on the replicated data causes any issues, and how to fix those.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: (was: HIVE-21999.patch)

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Updated] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aron Hamvas updated HIVE-21999:
---
Attachment: HIVE-21999.patch

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials is
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to the hidden list in HiveConf.





[jira] [Commented] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890085#comment-16890085
 ] 

Hive QA commented on HIVE-21999:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18129/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18129/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials are 
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to hidden list in HiveConf
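The mechanism the patch relies on can be illustrated with a minimal, self-contained sketch. This is not Hive's actual implementation — class and method names here are invented for illustration — but it mirrors the idea behind HiveConf's hidden list: keys named in the list have their values blanked before the configuration is logged or exposed, so adding `fs.azure.account.oauth2.client.secret` to that list keeps the ABFS client secret out of logs. Treating the blank-string masking as the redaction behavior is an assumption of this sketch.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Minimal sketch (hypothetical class, not Hive code) of hidden-list redaction.
public class HiddenConfDemo {
    // Keys whose values must never appear in logs or client-visible dumps.
    static final Set<String> HIDDEN = new HashSet<>(Arrays.asList(
        "javax.jdo.option.ConnectionPassword",
        "fs.azure.account.oauth2.client.secret"  // the key this JIRA adds
    ));

    // Return a copy of the configuration with hidden values blanked out.
    static Map<String, String> stripHidden(Map<String, String> conf) {
        Map<String, String> out = new LinkedHashMap<>(conf);
        for (String key : HIDDEN) {
            if (out.containsKey(key)) {
                out.put(key, "");  // assumption: redaction blanks the value
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("fs.azure.account.oauth2.client.secret", "s3cr3t");
        conf.put("hive.execution.engine", "tez");
        System.out.println(stripHidden(conf));
    }
}
```

The point of keying this on configuration *names* is that redaction then requires no knowledge of which component set the value — any code path that dumps the configuration (logging, `set -v`, web UIs) can call the same strip step first.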



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21960) HMS tasks on replica

2019-07-22 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21960:
--
Status: In Progress  (was: Patch Available)

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21960.01.patch, Replication and House keeping 
> tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # they need to be performed on the replicated data, and
>  # performing them on the replicated data causes any issues, and how to fix those.





[jira] [Updated] (HIVE-21960) HMS tasks on replica

2019-07-22 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-21960:
--
Attachment: HIVE-21960.02.patch
Status: Patch Available  (was: In Progress)

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, Replication 
> and House keeping tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # they need to be performed on the replicated data, and
>  # performing them on the replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890112#comment-16890112
 ] 

Hive QA commented on HIVE-21999:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975394/HIVE-21999.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16682 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18129/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18129/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18129/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975394 - PreCommit-HIVE-Build

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials are 
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to hidden list in HiveConf





[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890129#comment-16890129
 ] 

Hive QA commented on HIVE-22009:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 19s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-18130/patches/PreCommit-HIVE-Build-18130.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18130/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |





> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, 
> HIVE-22009.1-branch-3.1.patch, HIVE-22009.1.patch, HIVE-22009.2.patch, 
> HIVE-22009.patch
>
>
> Steps to repro :
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> ++
> | createtab_stmt |
> ++
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
> | STORED AS INPUTFORMAT |
> | 'org.apache.hadoop.mapred.TextInputFormat' |
> | OUTPUTFORMAT |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' |
> | TBLPROPERTIES ( |
> | 'bucketing_version'='2', |
> | 'transient_lastDdlTime'='1563467962') |
> ++{code}
> The table location is not '/tmp/emp_ext_table'; instead, it is set to the
> default warehouse path.
>  
>  





[jira] [Commented] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Aron Hamvas (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890138#comment-16890138
 ] 

Aron Hamvas commented on HIVE-21999:


Review board link: 

https://reviews.apache.org/r/71134/

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials are 
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to hidden list in HiveConf





[jira] [Assigned] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi reassigned HIVE-22031:
-


> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Minor
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.j

[jira] [Commented] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890165#comment-16890165
 ] 

Artem Velykorodnyi commented on HIVE-22031:
---

The code was added in the scope of HIVE-15192. [~vgarg], can you please take a look?

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Minor
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.ref

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Attachment: HIVE-22031.patch

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Minor
> Attachments: HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Attachment: (was: HIVE-22031.patch)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Minor
> Attachments: HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(Delegating

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Attachment: HIVE-22031.patch
Status: Patch Available  (was: Open)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Minor
> Attachments: HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMe

[jira] [Commented] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890172#comment-16890172
 ] 

Peter Vary commented on HIVE-21999:
---

+1

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials are 
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to hidden list in HiveConf
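For context, the hidden-list idea described above can be sketched outside Hive. This is an illustration only, not the HiveConf implementation: the `HIDDEN_LIST` set and `strip_hidden` helper below are hypothetical names, standing in for the real mechanism that blanks sensitive properties before configurations are logged or displayed.

```python
# Sketch of the hidden-list idea (illustration only, not HiveConf itself):
# properties on the hidden list are blanked before configs are logged or shown.
HIDDEN_LIST = {
    "javax.jdo.option.ConnectionPassword",    # example of an already-hidden key
    "fs.azure.account.oauth2.client.secret",  # the ABFS secret this issue adds
}

def strip_hidden(conf: dict) -> dict:
    """Return a copy of conf with hidden values blanked out."""
    return {k: ("" if k in HIDDEN_LIST else v) for k, v in conf.items()}

conf = {
    "fs.azure.account.oauth2.client.secret": "s3cret",
    "hive.execution.engine": "tez",
}
print(strip_hidden(conf))
# → {'fs.azure.account.oauth2.client.secret': '', 'hive.execution.engine': 'tez'}
```

The point of the patch is simply that the ABFS secret key joins this list, so it can never leak into logs or `set -v` style output.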



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Priority: Major  (was: Minor)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcce
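The NOT EXISTS semantics the repro above exercises can be sketched outside Hive. This is an illustration of what the query should return, not the decorrelator fix; the sample rows are hypothetical.

```python
# Hypothetical sample rows for the two tables in the repro
orders = [(1, "C1"), (2, "C2"), (3, "C3")]   # (ORD_NUM, CUST_CODE)
customers = [("C1",)]                         # (CUST_CODE,)

known = {c[0] for c in customers}

# NOT EXISTS anti-join, then the DISTINCT projection with the two
# constant columns ('777', '888') that trigger the decorrelator bug
result = sorted({
    (cust_code, "777", ord_num, "888")
    for ord_num, cust_code in orders
    if cust_code not in known
})
print(result)
# → [('C2', '777', 2, '888'), ('C3', '777', 3, '888')]
```

The crash happens while Calcite-based planning decorrelates the subquery, before any rows are read; the constant columns shift the projection's field indices, which is where the IndexOutOfBoundsException originates.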

[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890183#comment-16890183
 ] 

Hive QA commented on HIVE-22009:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975388/HIVE-22009.1-branch-3.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 119 failed/errored test(s), 14410 tests 
executed
*Failed tests:*
{noformat}
TestAddPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestAddPartitionsFromPartSpec - did not produce a TEST-*.xml file (likely timed 
out) (batchId=228)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
TestAggregateStatsCache - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
TestAlterPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestAppendPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=272)
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestCatalogNonDefaultClient - did not produce a TEST-*.xml file (likely timed 
out) (batchId=226)
TestCatalogNonDefaultSvr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
TestCatalogOldClient - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestCatalogs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestCheckConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestCloseableThreadLocal - did not produce a TEST-*.xml file (likely timed out) 
(batchId=330)
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=236)
TestDatabaseName - did not produce a TEST-*.xml file (likely timed out) 
(batchId=195)
TestDatabases - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestDefaultConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDropPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=272)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=229)
TestExchangePartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestFMSketchSerialization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=237)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestForeignKey - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestFunctions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestGetPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestGetPartitionsUsingProjectionAndFilterSpecs - did not produce a TEST-*.xml 
file (likely timed out) (batchId=228)
TestGetTableMeta - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHLLNoBias - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHLLSerialization - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHdfsUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=232)
TestHiveAlterHandler - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed 
out) (batchId=236)
TestHiveMetaStorePartitionSpecs - did not produce a TEST-*.xml file (likely 
timed out) (batchId=228)
TestHiveMetaStoreSchemaMethods - did not produce a TEST-*.xml file (likely 
timed out) (batchId=236)
TestHiveMetaStoreTimeout - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=231)
TestHiveMetaToolCommandLine - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestHiveMetastoreCli - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHmsServerAuthorization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=232)
TestHyperLogLog - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHyperLogLogDense - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHyperLogLogMerge - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHyperLogLogSparse - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestJSONMessageDeser

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Status: Open  (was: Patch Available)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.patch
>
>

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Attachment: HIVE-22031.1.patch
Status: Patch Available  (was: Open)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.1.patch, HIVE-22031.patch
>
>

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Status: Open  (was: Patch Available)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, HIVE-22031.patch
>
>

[jira] [Updated] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Artem Velykorodnyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Velykorodnyi updated HIVE-22031:
--
Attachment: HIVE-22031.02.patch
Status: Patch Available  (was: Open)

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, HIVE-22031.patch
>
>

[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890227#comment-16890227
 ] 

Hive QA commented on HIVE-22009:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975388/HIVE-22009.1-branch-3.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18131/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18131/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18131/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12975388/HIVE-22009.1-branch-3.1.patch
 was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975388 - PreCommit-HIVE-Build

> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, 
> HIVE-22009.1-branch-3.1.patch, HIVE-22009.1.patch, HIVE-22009.2.patch, 
> HIVE-22009.patch
>
>
> Steps to repro :
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> ++
> | createtab_stmt |
> ++
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
> | STORED AS INPUTFORMAT |
> | 'org.apache.hadoop.mapred.TextInputFormat' |
> | OUTPUTFORMAT |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' |
> | TBLPROPERTIES ( |
> | 'bucketing_version'='2', |
> | 'transient_lastDdlTime'='1563467962') |
> ++{code}
> Table Location is not '/tmp/emp_ext_table', instead location is set to 
> default warehouse path.
>  
>  
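The mismatch described above can be sketched as a simple check against the `show create table` output. This is an illustration of what "not honoured" means, not the Hive fix; the `table_location` helper and the inlined DDL string are hypothetical stand-ins for the real output.

```python
import re

# Location the user asked for in the CREATE EXTERNAL TABLE ... LIKE statement
requested = "/tmp/emp_ext_table"

# Abbreviated stand-in for the SHOW CREATE TABLE output quoted in the report
show_create = """\
CREATE EXTERNAL TABLE `emp_ext_table`(
  `id` int)
LOCATION
  'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table'
"""

def table_location(ddl: str) -> str:
    # Grab the quoted path that follows the LOCATION keyword
    m = re.search(r"LOCATION\s*\n\s*'([^']+)'", ddl)
    return m.group(1) if m else ""

actual = table_location(show_create)
# The bug: the table landed under the default warehouse path, not the
# user-specified /tmp/emp_ext_table
print(actual.endswith(requested))
# → False
```

In other words, the LOCATION clause is silently dropped for the CREATE TABLE LIKE <view> path, and the table is created under the default external warehouse directory instead.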





[jira] [Commented] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890263#comment-16890263
 ] 

Hive QA commented on HIVE-21999:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18132/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18132/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials are 
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to hidden list in HiveConf



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Naresh P R (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-22009:
--
Attachment: HIVE-22009.3.patch

> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, 
> HIVE-22009.1-branch-3.1.patch, HIVE-22009.1.patch, HIVE-22009.2.patch, 
> HIVE-22009.3.patch, HIVE-22009.patch
>
>
> Steps to repro :
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> ++
> | createtab_stmt |
> ++
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
> | STORED AS INPUTFORMAT |
> | 'org.apache.hadoop.mapred.TextInputFormat' |
> | OUTPUTFORMAT |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' |
> | TBLPROPERTIES ( |
> | 'bucketing_version'='2', |
> | 'transient_lastDdlTime'='1563467962') |
> ++{code}
> Table location is not '/tmp/emp_ext_table'; instead, the location is set to 
> the default warehouse path.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890297#comment-16890297
 ] 

Hive QA commented on HIVE-21999:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975397/HIVE-21999.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16682 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18132/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18132/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18132/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975397 - PreCommit-HIVE-Build

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials are 
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to hidden list in HiveConf



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-21921) Support for correlated quantified predicates

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21921?focusedWorklogId=280516&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280516
 ]

ASF GitHub Bot logged work on HIVE-21921:
-

Author: ASF GitHub Bot
Created on: 22/Jul/19 17:06
Start Date: 22/Jul/19 17:06
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #693: 
HIVE-21921: Support for correlated quantified predicates
URL: https://github.com/apache/hive/pull/693
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280516)
Time Spent: 1h 20m  (was: 1h 10m)

> Support for correlated quantified predicates
> 
>
> Key: HIVE-21921
> URL: https://issues.apache.org/jira/browse/HIVE-21921
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21921.1.patch, HIVE-21921.2.patch, 
> HIVE-21921.3.patch, HIVE-21921.4.patch, HIVE-21921.5.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21999) Add sensitive ABFS configuration properties to HiveConf hidden list

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890374#comment-16890374
 ] 

Hive QA commented on HIVE-21999:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975397/HIVE-21999.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18133/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18133/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18133/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12975397/HIVE-21999.patch was 
found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975397 - PreCommit-HIVE-Build

> Add sensitive ABFS configuration properties to HiveConf hidden list
> ---
>
> Key: HIVE-21999
> URL: https://issues.apache.org/jira/browse/HIVE-21999
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Aron Hamvas
>Assignee: Aron Hamvas
>Priority: Major
> Attachments: HIVE-21999.patch
>
>
> We need to make sure that sensitive information such as ABFS credentials are 
> not logged.
> Add "fs.azure.account.oauth2.client.secret" to hidden list in HiveConf



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread Grant Henke (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated HIVE-12971:
---
Attachment: HIVE-12971.3.patch

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890380#comment-16890380
 ] 

Vineet Garg commented on HIVE-21991:


Multiple tests in {{TestInputOutputFormat}} are failing because the number of 
read operations is lower (specifically, read ops are reduced by 2 now).

 {{testACIDReaderNoFooterSerialize}} does the following:
 * Create 2 files and write data to them.
 * Call getSplits on the dir. The number of read ops here is the same as before (3).
 * Call getRecordReader on each split and check the number of read ops.

This test expects the number of read ops to be 8 after the last call 
(getRecordReader) across all splits
{noformat}
// call-1: open to read footer - split 1 => mock:/mocktable5/0_0
// call-2: open to read data - split 1 => mock:/mocktable5/0_0
// call-3: getAcidState - split 1 => mock:/mocktable5 (to compute offset 
for original read)
// call-4: open to read footer - split 2 => mock:/mocktable5/0_1
// call-5: open to read data - split 2 => mock:/mocktable5/0_1
// call-6: getAcidState - split 2 => mock:/mocktable5 (to compute offset 
for original read)
// call-7: open to read footer - split 2 => mock:/mocktable5/0_0 (to get 
row count)
// call-8: file status - split 2 => mock:/mocktable5/0_0
{noformat}
But the number of read ops is 6 instead of 8.

I don't understand call-7 and call-8; 6 read ops make sense to me. So either 
this was a bug before that is now fixed, or the number of read ops is wrong now.

[~prasanth_j] [~gopalv] [~bslim] Can you please help me understand this? How 
can I debug this further to find out exactly which read operations are being 
done? Are there trace logs that I can turn on?
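
One way to see which read operations actually happen is to count them at the point files are opened, as Hive's mock filesystem tests do. A generic, hypothetical sketch of that counting technique (not Hive's MockFileSystem):

```python
# Hypothetical debugging aid (not Hive's MockFileSystem): wrap a file-opening
# function so every read operation is recorded with the path that triggered it.
# Tracing which opens actually occur is one way to see why an expected count
# of 8 read ops drops to 6 after a library upgrade.

import functools

def count_reads(open_fn, log):
    @functools.wraps(open_fn)
    def wrapper(path, *args, **kwargs):
        log.append(path)  # record every open so the total can be audited
        return open_fn(path, *args, **kwargs)
    return wrapper

calls = []
fake_open = count_reads(lambda path: f"<handle:{path}>", calls)

# Simulate a few of the per-split reads described above.
fake_open("mock:/mocktable5/0_0")  # open to read footer - split 1
fake_open("mock:/mocktable5/0_0")  # open to read data - split 1
fake_open("mock:/mocktable5/0_1")  # open to read footer - split 2
print(len(calls))  # 3
```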

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread Grant Henke (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890382#comment-16890382
 ] 

Grant Henke commented on HIVE-12971:


The latest patch, version 3, has good qtest coverage and should be ready for 
review. 

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21960) HMS tasks on replica

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890419#comment-16890419
 ] 

Hive QA commented on HIVE-21960:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
29s{color} | {color:blue} standalone-metastore/metastore-common in master has 
31 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
15s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch metastore-common passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 50 unchanged - 1 fixed = 50 total (was 51) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 6 new + 68 unchanged - 1 fixed 
= 74 total (was 69) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 237 
unchanged - 12 fixed = 276 total (was 249) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18134/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18134/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18134/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18134/yetus.txt |
|

[jira] [Commented] (HIVE-21960) HMS tasks on replica

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890445#comment-16890445
 ] 

Hive QA commented on HIVE-21960:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975399/HIVE-21960.02.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 16686 tests 
executed
*Failed tests:*
{noformat}
TestReplicationWithStatsUpdaterTask - did not produce a TEST-*.xml file (likely 
timed out) (batchId=273)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_1_drop] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_2_exim_basic] 
(batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_3_exim_metadata] 
(batchId=63)
org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testBootStrapDumpOfWarehouse
 (batchId=259)
org.apache.hadoop.hive.ql.parse.TestReplTableMigrationWithJsonFormat.testIncrementalLoadMigrationManagedToAcidAllOp
 (batchId=265)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.testBootStrapDumpOfWarehouse
 (batchId=263)
org.apache.hadoop.hive.ql.parse.TestReplicationWithTableMigration.testIncrementalLoadMigrationManagedToAcidAllOp
 (batchId=254)
org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosMigration.testForParallelBootstrapLoad
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosMigration.testNonParallelBootstrapLoad
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosMigration.testRetryFailure
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosMigrationNoAutogather.testForParallelBootstrapLoad
 (batchId=266)
org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosMigrationNoAutogather.testNonParallelBootstrapLoad
 (batchId=266)
org.apache.hadoop.hive.ql.parse.TestStatsReplicationScenariosMigrationNoAutogather.testRetryFailure
 (batchId=266)
org.apache.hive.hcatalog.api.repl.commands.TestCommands.testBasicReplEximCommands
 (batchId=206)
org.apache.hive.hcatalog.api.repl.commands.TestCommands.testMetadataReplEximCommands
 (batchId=206)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=283)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=283)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomNonExistent
 (batchId=283)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead 
(batchId=283)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=283)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime
 (batchId=283)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime
 (batchId=283)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18134/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18134/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18134/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 23 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975399 - PreCommit-HIVE-Build

> HMS tasks on replica
> 
>
> Key: HIVE-21960
> URL: https://issues.apache.org/jira/browse/HIVE-21960
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, repl
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
> Attachments: HIVE-21960.01.patch, HIVE-21960.02.patch, Replication 
> and House keeping tasks.pdf
>
>
> An HMS performs a number of housekeeping tasks. Assess whether
>  # They are required to be performed on the replicated data
>  # Performing them on replicated data causes any issues, and how to fix those.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890461#comment-16890461
 ] 

Gopal V commented on HIVE-21991:


I think this might be related to - 
https://github.com/apache/orc/commit/6d3943eb456985a00973dc2e94ad3b3389c4ba05

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890467#comment-16890467
 ] 

Hive QA commented on HIVE-22031:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
17s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18135/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18135/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optim
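
The correlated NOT EXISTS in the repro above is effectively an anti-join: keep the orders with no matching customer, alongside the two constant columns. A plain-Python sketch of the intended semantics (hypothetical data; this only illustrates the expected result, not the Calcite decorrelation that fails here):

```python
# Sketch of the repro query's intended semantics: a correlated NOT EXISTS is
# an anti-join that keeps orders whose CUST_CODE has no match in customers.
# Data is hypothetical and for illustration only.

orders = [
    {"ord_num": 1, "cust_code": "A"},
    {"ord_num": 2, "cust_code": "B"},
]
customers = [{"cust_code": "A"}]

existing = {c["cust_code"] for c in customers}
result = [
    {"cust_code": o["cust_code"], "any": "777",
     "ord_num": o["ord_num"], "constant": "888"}
    for o in orders
    if o["cust_code"] not in existing  # NOT EXISTS (...) as an anti-join
]
print(result)  # [{'cust_code': 'B', 'any': '777', 'ord_num': 2, 'constant': '888'}]
```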

[jira] [Commented] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890489#comment-16890489
 ] 

Vineet Garg commented on HIVE-21991:


[~gopalv] This is exactly it! Thanks. I confirmed that {{readStripeFooter}} is 
no longer opening the same file.

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890492#comment-16890492
 ] 

Hive QA commented on HIVE-22031:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975412/HIVE-22031.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18136/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18136/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18136/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12975412/HIVE-22031.02.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975412 - PreCommit-HIVE-Build

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.

[jira] [Commented] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890493#comment-16890493
 ] 

Hive QA commented on HIVE-22031:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975412/HIVE-22031.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18137/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18137/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18137/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12975412/HIVE-22031.02.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975412 - PreCommit-HIVE-Build

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM INT, CUST_CODE STRING);
> 2. Create table customers
> create table customers (CUST_CODE STRING);
> 3. Make select with constants and with a subquery:
> select DISTINCT(CUST_CODE), '777' as ANY, ORD_NUM, '888' as CONSTANT
> from orders 
> WHERE not exists 
> (select 1 
> from customers 
> WHERE CUST_CODE=orders.CUST_CODE
> );
> {code}
> Query fails with IndexOutOfBoundsException
> {code}
> Exception in thread "main" java.lang.AssertionError: Internal error: While 
> invoking method 'public 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator$Frame 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateRel(org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveProject)
>  throws org.apache.hadoop.hive.ql.parse.SemanticException'
>   at org.apache.calcite.util.Util.newInternal(Util.java:792)
>   at org.apache.calcite.util.ReflectUtil$2.invoke(ReflectUtil.java:534)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.getInvoke(HiveRelDecorrelator.java:660)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelate(HiveRelDecorrelator.java:252)
>   at 
> org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRelDecorrelator.decorrelateQuery(HiveRelDecorrelator.java:218)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1347)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1261)
>   at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:113)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:997)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:149)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.

[jira] [Commented] (HIVE-22031) HiveRelDecorrelator fails with IndexOutOfBoundsException if the query contains several "constant" columns

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890491#comment-16890491
 ] 

Hive QA commented on HIVE-22031:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975412/HIVE-22031.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 16682 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestTxnCommands2.testACIDwithSchemaEvolutionAndCompaction
 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testAcidWithSchemaEvolution 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testAlterTable (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testBucketCodec (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testBucketizedInputFormat 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testCleanerForTxnToWriteId 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testCompactWithDelete (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testDeleteEventsCompaction 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testDeleteIn (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testDynamicPartitionsMerge 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testDynamicPartitionsMerge2 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testETLSplitStrategyForACID 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testEmptyInTblproperties 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testFailHeartbeater (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testFailureOnAlteringTransactionalProperties
 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testFileSystemUnCaching (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testInitiatorWithMultipleFailedCompactions
 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testInsertOverwrite1 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testInsertOverwrite2 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testInsertOverwriteWithSelfJoin 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMerge (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMerge2 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMerge3 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMergeWithPredicate (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMmTableCompaction (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMultiInsert (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMultiInsertStatement 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidInsert (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion02 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testOpenTxnsCounter (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testOrcNoPPD (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testOrcPPD (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testOriginalFileReaderWhenNonAcidConvertedToAcid
 (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testUpdateMixedCase (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.testValidTxnsBookkeeping 
(batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.updateDeletePartitioned (batchId=330)
org.apache.hadoop.hive.ql.TestTxnCommands2.writeBetweenWorkerAndCleaner 
(batchId=330)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18135/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18135/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18135/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 37 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975412 - PreCommit-HIVE-Build

> HiveRelDecorrelator fails with IndexOutOfBoundsException if the query 
> contains several "constant" columns
> -
>
> Key: HIVE-22031
> URL: https://issues.apache.org/jira/browse/HIVE-22031
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5
>Reporter: Artem Velykorodnyi
>Assignee: Artem Velykorodnyi
>Priority: Major
> Attachments: HIVE-22031.02.patch, HIVE-22031.1.patch, HIVE-22031.patch
>
>
> Steps for reproducing:
> {code}
> 1. Create table orders
> create table orders (ORD_NUM

[jira] [Updated] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21991:
---
Status: Patch Available  (was: Open)

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21991:
---
Status: Open  (was: Patch Available)

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21991:
---
Attachment: HIVE-21991.3.patch

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890510#comment-16890510
 ] 

Hive QA commented on HIVE-22009:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
4s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18138/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18138/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, 
> HIVE-22009.1-branch-3.1.patch, HIVE-22009.1.patch, HIVE-22009.2.patch, 
> HIVE-22009.3.patch, HIVE-22009.patch
>
>
> Steps to repro :
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> ++
> | createtab_stmt |
> ++
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.la

[jira] [Commented] (HIVE-20726) The hive.server2.thrift.bind.host is not honored after changing in Hive 2.3.3

2019-07-22 Thread Chance Zibolski (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890521#comment-16890521
 ] 

Chance Zibolski commented on HIVE-20726:


I've traced the issue to 
[https://github.com/apache/hive/blob/rel/release-2.3.5/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java#L168]

The problem is that this local declaration of hiveHost shadows the hiveHost field of the class.

Additionally, I believe I found a potential problem in 
[https://github.com/apache/hive/blob/rel/release-2.3.5/service/src/java/org/apache/hive/service/server/HiveServer2.java#L145]
 because this overrides the bind host value. It seems this only exists to 
enable ZooKeeper discovery, but a better approach would be to have a second 
option for the hostname to publish, rather than reusing the bind host for it.

> The hive.server2.thrift.bind.host is not honored after changing in Hive 2.3.3
> -
>
> Key: HIVE-20726
> URL: https://issues.apache.org/jira/browse/HIVE-20726
> Project: Hive
>  Issue Type: Bug
> Environment: AWS Centos 7 machine with custom installed Hive 
> 2.3.3/Hadoop 2.9.0 and Tez 0.8.5
>Reporter: Vignesh Sankaran
>Priority: Critical
>
> The hive.server2.thrift.bind.host property was set to a custom IP where Hive 
> 2.3.3 is setup and also the hive.server2.transport.mode=binary was set. The 
> change is not at all being honored and the server starts to listen on 
> 0.0.0.0:10002.
>  
> I am unable to connect to the remote because of this using the JDBC client 
> DBeaver.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-20726) The hive.server2.thrift.bind.host is not honored after changing in Hive 2.3.3

2019-07-22 Thread Chance Zibolski (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890524#comment-16890524
 ] 

Chance Zibolski commented on HIVE-20726:


Seems that the change was introduced in 
[https://github.com/apache/hive/commit/9f57569b0f648bb5596df60e0a62db06930778ea],
 so apparently all the way back in 2015.

> The hive.server2.thrift.bind.host is not honored after changing in Hive 2.3.3
> -
>
> Key: HIVE-20726
> URL: https://issues.apache.org/jira/browse/HIVE-20726
> Project: Hive
>  Issue Type: Bug
> Environment: AWS Centos 7 machine with custom installed Hive 
> 2.3.3/Hadoop 2.9.0 and Tez 0.8.5
>Reporter: Vignesh Sankaran
>Priority: Critical
>
> The hive.server2.thrift.bind.host property was set to a custom IP where Hive 
> 2.3.3 is setup and also the hive.server2.transport.mode=binary was set. The 
> change is not at all being honored and the server starts to listen on 
> 0.0.0.0:10002.
>  
> I am unable to connect to the remote because of this using the JDBC client 
> DBeaver.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21935) Hive Vectorization : degraded performance with vectorize UDF

2019-07-22 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890525#comment-16890525
 ] 

Gopal V commented on HIVE-21935:


This is an indirect side-effect of vectorizing ListColumnVector, not of the 
UDF directly.

> Hive Vectorization : degraded performance with vectorize UDF  
> --
>
> Key: HIVE-21935
> URL: https://issues.apache.org/jira/browse/HIVE-21935
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 3.1.1
> Environment: Hive-3, JDK-8
>Reporter: Rajkumar Singh
>Priority: Major
>  Labels: performance
> Attachments: CustomSplit-1.0-SNAPSHOT.jar
>
>
> with vectorization turned on and hive.vectorized.adaptor.usage.mode=all we 
> were seeing severe performance degradation. Looking at the task jstacks, it 
> seems that it is running the code which vectorizes the UDF and is stuck in a loop.
> {code:java}
> jstack -l 14954 | grep 0x3af0 -A20
> "TezChild" #15 daemon prio=5 os_prio=0 tid=0x7f157538d800 nid=0x3af0 
> runnable [0x7f1547581000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:573)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:350)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:205)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:150)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression.evaluateChildren(VectorExpression.java:271)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.ListIndexColScalar.evaluate(ListIndexColScalar.java:59)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:889)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> [yarn@hdp32b ~]$ jstack -l 14954 | grep 0x3af0 -A20
> "TezChild" #15 daemon prio=5 os_prio=0 tid=0x7f157538d800 nid=0x3af0 
> runnable [0x7f1547581000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.ensureSize(BytesColumnVector.java:554)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:570)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:350)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:205)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:150)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression.evaluateChildren(VectorExpression.java:271)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.ListIndexColScalar.evaluate(ListIndexColScalar.java:59)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:889)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>   at 
> org.apache.had

[jira] [Commented] (HIVE-20726) The hive.server2.thrift.bind.host is not honored after changing in Hive 2.3.3

2019-07-22 Thread Chance Zibolski (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890526#comment-16890526
 ] 

Chance Zibolski commented on HIVE-20726:


So that's introduced via HIVE-12568

> The hive.server2.thrift.bind.host is not honored after changing in Hive 2.3.3
> -
>
> Key: HIVE-20726
> URL: https://issues.apache.org/jira/browse/HIVE-20726
> Project: Hive
>  Issue Type: Bug
> Environment: AWS Centos 7 machine with custom installed Hive 
> 2.3.3/Hadoop 2.9.0 and Tez 0.8.5
>Reporter: Vignesh Sankaran
>Priority: Critical
>
> The hive.server2.thrift.bind.host property was set to a custom IP where Hive 
> 2.3.3 is setup and also the hive.server2.transport.mode=binary was set. The 
> change is not at all being honored and the server starts to listen on 
> 0.0.0.0:10002.
>  
> I am unable to connect to the remote because of this using the JDBC client 
> DBeaver.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280685&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280685
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 22/Jul/19 23:05
Start Date: 22/Jul/19 23:05
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729#discussion_r306068752
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/perf/tez/constraints/query32.q.out
 ##
 @@ -115,7 +115,7 @@ Stage-0
   Select Operator [SEL_114] (rows=285116600 
width=119)
 Output:["_col0","_col1","_col2"]
 Filter Operator [FIL_112] (rows=285116600 
width=119)
-  predicate:(cs_ext_discount_amt is not null 
and cs_sold_date_sk is not null)
+  predicate:(cs_ext_discount_amt is not null 
and cs_sold_date_sk is not null and cs_item_sk BETWEEN 
DynamicValue(RS_28_item_i_item_sk_min) AND 
DynamicValue(RS_28_item_i_item_sk_max) and in_bloom_filter(cs_item_sk, 
DynamicValue(RS_28_item_i_item_sk_bloom_filter)))
 
 Review comment:
   Do you know if this was leading to wrong results or if the filter was 
ignored silently?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280685)
Time Spent: 20m  (was: 10m)

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22003:
--
Labels: pull-request-available  (was: )

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280684&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280684
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 22/Jul/19 23:05
Start Date: 22/Jul/19 23:05
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729#discussion_r306066881
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java
 ##
 @@ -486,6 +510,75 @@ private static boolean 
sharedWorkOptimization(ParseContext pctx, SharedWorkOptim
 return mergedExecuted;
   }
 
+  private static void replaceSemijoinExpressions(TableScanOperator tsOp, 
List<ExprNodeDesc> semijoinExprNodes) {
 
 Review comment:
   If I understand this correctly this is suppose to remove all semijoin 
related expressions from TS as well as its children filter op. Why is this 
method taking `retainableTsOp's` `semijoinExprNodes`?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280684)
Time Spent: 10m
Remaining Estimate: 0h

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22028) Clean up Add Partition

2019-07-22 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22028:
--
Attachment: HIVE-22028.02.patch

> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22028.01.patch, HIVE-22028.02.patch
>
>
> AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
> classes. This can not be done 100% right now, as it requires the refactoring 
> of the ImportSemanticAnalyzer, so the task will be finished only then.
> Add Partition logic should be moved from Hive.java to 
> AlterTableAddPartitionOperation.java, only the metastore calls should remain 
> in Hive.java.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22028) Clean up Add Partition

2019-07-22 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22028:
--
Description: 
AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
classes. This can not be done 100% right now, as it requires the refactoring of 
the ImportSemanticAnalyzer, so the task will be finished only then.

Add Partition logic should be moved from Hive.java to 
AlterTableAddPartitionOperation.java, only the metastore calls should remain in 
Hive.java.

  was:AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
classes. This can not be done 100% right now, as it requires the refactoring of 
the ImportSemanticAnalyzer, so the task will be finished only then.


> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22028.01.patch, HIVE-22028.02.patch
>
>
> AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
> classes. This can not be done 100% right now, as it requires the refactoring 
> of the ImportSemanticAnalyzer, so the task will be finished only then.
> Add Partition logic should be moved from Hive.java to 
> AlterTableAddPartitionOperation.java, only the metastore calls should remain 
> in Hive.java.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280686
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 22/Jul/19 23:07
Start Date: 22/Jul/19 23:07
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729#discussion_r306069305
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/perf/tez/constraints/query32.q.out
 ##
 @@ -115,7 +115,7 @@ Stage-0
   Select Operator [SEL_114] (rows=285116600 
width=119)
 Output:["_col0","_col1","_col2"]
 Filter Operator [FIL_112] (rows=285116600 
width=119)
-  predicate:(cs_ext_discount_amt is not null 
and cs_sold_date_sk is not null)
+  predicate:(cs_ext_discount_amt is not null 
and cs_sold_date_sk is not null and cs_item_sk BETWEEN 
DynamicValue(RS_28_item_i_item_sk_min) AND 
DynamicValue(RS_28_item_i_item_sk_max) and in_bloom_filter(cs_item_sk, 
DynamicValue(RS_28_item_i_item_sk_bloom_filter)))
 
 Review comment:
   That the SJ is not there will not lead to incorrect results, but can lead to 
a performance penalty.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280686)
Time Spent: 0.5h  (was: 20m)

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22028) Clean up Add Partition

2019-07-22 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22028:
--
Summary: Clean up Add Partition  (was: Make AlterTableAddPartitionDesc 
immutable)

> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22028.01.patch, HIVE-22028.02.patch
>
>
> AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
> classes. This can not be done 100% right now, as it requires the refactoring 
> of the ImportSemanticAnalyzer, so the task will be finished only then.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890535#comment-16890535
 ] 

Hive QA commented on HIVE-22009:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975422/HIVE-22009.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16683 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18138/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18138/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18138/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975422 - PreCommit-HIVE-Build

> CTLV with user specified location is not honoured 
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0, 3.1.1
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, 
> HIVE-22009.1-branch-3.1.patch, HIVE-22009.1.patch, HIVE-22009.2.patch, 
> HIVE-22009.3.patch, HIVE-22009.patch
>
>
> Steps to repro :
>  
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION 
> '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>  
> {code:java}
> ++
> | createtab_stmt |
> ++
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
> | STORED AS INPUTFORMAT |
> | 'org.apache.hadoop.mapred.TextInputFormat' |
> | OUTPUTFORMAT |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' |
> | TBLPROPERTIES ( |
> | 'bucketing_version'='2', |
> | 'transient_lastDdlTime'='1563467962') |
> ++{code}
> Table Location is not '/tmp/emp_ext_table', instead location is set to 
> default warehouse path.
>  
>  
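The repro above shows the user-supplied `LOCATION '/tmp/emp_ext_table'` being silently replaced by the default warehouse path in the CREATE TABLE LIKE VIEW path. The intended precedence can be sketched as below; the function and parameter names are hypothetical, not Hive's actual API:

```python
def resolve_table_location(user_location, db_warehouse_dir, table_name):
    """Expected behavior: an explicit LOCATION clause always wins; only when
    none is given should the table fall back to the warehouse default.
    The bug reported here is that CTLV ignores user_location entirely."""
    if user_location:
        return user_location
    return f"{db_warehouse_dir}/{table_name}"
```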



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280690
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 22/Jul/19 23:15
Start Date: 22/Jul/19 23:15
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729#discussion_r306070882
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java
 ##
 @@ -486,6 +510,75 @@ private static boolean 
sharedWorkOptimization(ParseContext pctx, SharedWorkOptim
 return mergedExecuted;
   }
 
+  private static void replaceSemijoinExpressions(TableScanOperator tsOp, 
List<ExprNodeDesc> semijoinExprNodes) {
 
 Review comment:
   We are merging the discardable TS into the retainable TS. We remove the SJ 
expressions from the discardable TS since we are about to remove it from the 
operator tree (we preserve the rest of filter expressions without SJ 
expressions from this branch + the expressions with SJ coming from other branch 
so we can add them to the Filter operator on top of the TS).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280690)
Time Spent: 40m  (was: 0.5h)

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280696&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280696
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 22/Jul/19 23:26
Start Date: 22/Jul/19 23:26
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729#discussion_r306073523
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java
 ##
 @@ -486,6 +510,75 @@ private static boolean 
sharedWorkOptimization(ParseContext pctx, SharedWorkOptim
 return mergedExecuted;
   }
 
+  private static void replaceSemijoinExpressions(TableScanOperator tsOp, 
List<ExprNodeDesc> semijoinExprNodes) {
 
 Review comment:
   So the SJ branch from retainable TS is pushed to discardable TS  so that it 
is later pushed to discardable TS's children filter op? Why do we want to do 
that? Shouldn't retainable SJ be retained with retainable FILTER op only? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280696)
Time Spent: 50m  (was: 40m)

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280703&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280703
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 22/Jul/19 23:46
Start Date: 22/Jul/19 23:46
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729#discussion_r306078025
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java
 ##
 @@ -486,6 +510,75 @@ private static boolean 
sharedWorkOptimization(ParseContext pctx, SharedWorkOptim
 return mergedExecuted;
   }
 
+  private static void replaceSemijoinExpressions(TableScanOperator tsOp, 
List<ExprNodeDesc> semijoinExprNodes) {
 
 Review comment:
   Yes, that is correct. But it is not the branch, it is only the expressions.
   
   The reason is that we have already verified that the SJ expressions hitting 
both TS operators are the same. At the current step we are already merging. 
Thus, what we want now is that the SJ expression from the retainable branch is 
on top of the discardable branch too. Since we already had a method to push the 
filter expressions on top of the discardable TS (`pushFilterToTopOfTableScan`), 
what I have done is that we remove the old SJ expressions from the discardable 
TS (and follow-up Filters if present) and we add the SJ expressions from the 
retainable TS, hence automatically they will be pushed on top of the 
discardable TS. Then we can just remove the discardable TS operator and connect 
its output operators with the retainable TS operator.
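The merge described in the comment above can be modeled as a toy sketch: the discardable scan's old semijoin expressions are replaced with the retainable scan's, pushed into the follow-up Filter, and the branch is then rewired onto the retainable scan. The dict-based operator model and the `"SJ:"` prefix convention are invented for illustration and do not reflect Hive's actual operator classes:

```python
def merge_scans(retainable, discardable):
    """Sketch of merging a discardable TableScan into a retainable one.
    Operators are plain dicts: {"op": ..., "sj_exprs": [...], "children": [...]}."""
    # 1. Replace the discardable scan's SJ expressions with the retainable ones
    #    (the analogue of replaceSemijoinExpressions).
    discardable["sj_exprs"] = list(retainable["sj_exprs"])
    # 2. Push them into the follow-up Filter, dropping the stale SJ predicates
    #    (the analogue of pushFilterToTopOfTableScan).
    for child in discardable["children"]:
        if child["op"] == "FIL":
            child["preds"] = [p for p in child["preds"] if not p.startswith("SJ:")]
            child["preds"] += discardable["sj_exprs"]
    # 3. Reconnect the discardable scan's outputs to the retainable scan
    #    and detach the discardable scan from the tree.
    retainable["children"] += discardable["children"]
    discardable["children"] = []
    return retainable
```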
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280703)
Time Spent: 1h  (was: 50m)

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280712&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280712
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 00:05
Start Date: 23/Jul/19 00:05
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729#discussion_r306081962
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java
 ##
 @@ -486,6 +510,75 @@ private static boolean 
sharedWorkOptimization(ParseContext pctx, SharedWorkOptim
 return mergedExecuted;
   }
 
+  private static void replaceSemijoinExpressions(TableScanOperator tsOp, 
List<ExprNodeDesc> semijoinExprNodes) {
 
 Review comment:
   This makes sense. Thanks for the explanation. Would you mind adding a 
comment to `replaceSemijoinExpressions`?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280712)
Time Spent: 1h 10m  (was: 1h)

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890560#comment-16890560
 ] 

Vineet Garg commented on HIVE-22003:


LGTM +1

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890563#comment-16890563
 ] 

Hive QA commented on HIVE-12971:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} llap-server in master has 83 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
47s{color} | {color:blue} itests/util in master has 44 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} kudu-handler in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18139/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18139/yetus/patch-mvninstall-kudu-handler.txt
 |
| modules | C: common llap-server kudu-handler . itests itests/qtest-kudu 
itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18139/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.

[jira] [Commented] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-22 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890565#comment-16890565
 ] 

Jesus Camacho Rodriguez commented on HIVE-19113:


[~vgarg], can you take a look?
https://github.com/apache/hive/pull/731
Thanks

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, while the file layout 
> does not do any actual clustering of rows.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.
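The defaulting rule proposed in this issue reduces to: when `CLUSTERED BY` is given without an explicit `SORTED BY`, use the bucketing columns as the sort specification. A minimal sketch, assuming a simplified DDL representation that is not Hive's actual parser output:

```python
def effective_sort_columns(clustered_by, sorted_by):
    """Return the sort spec a bucketed table should get: an explicit
    SORTED BY clause wins; otherwise default to sorting by the
    bucketing (CLUSTERED BY) columns in ascending order."""
    if sorted_by:
        return sorted_by
    return [(col, "ASC") for col in clustered_by]
```

So `clustered by (key) into 4 buckets` with no `sorted by` behaves as if `sorted by (key)` had been written.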



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890568#comment-16890568
 ] 

Hive QA commented on HIVE-12971:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975428/HIVE-12971.3.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16704 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestSSL.testSSLFetch (batchId=281)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18139/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18139/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18139/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975428 - PreCommit-HIVE-Build

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19113?focusedWorklogId=280735&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280735
 ]

ASF GitHub Bot logged work on HIVE-19113:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 01:03
Start Date: 23/Jul/19 01:03
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #731: HIVE-19113
URL: https://github.com/apache/hive/pull/731#discussion_r306091938
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java
 ##
 @@ -276,8 +283,7 @@ public Object process(Node nd, Stack stack, 
NodeProcessorCtx procCtx,
 
   // Create ReduceSink operator
   ReduceSinkOperator rsOp = getReduceSinkOp(partitionPositions, 
sortPositions, sortOrder, sortNullOrder,
-  allRSCols, bucketColumns, 
numBuckets,
- fsParent, 
fsOp.getConf().getWriteType());
 
 Review comment:
   This line shows up a conflict in some of my merges - is this just a 
white-space change?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280735)
Time Spent: 20m  (was: 10m)

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, while the file layout 
> does not do any actual clustering of rows.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19113?focusedWorklogId=280737&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280737
 ]

ASF GitHub Bot logged work on HIVE-19113:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 23/Jul/19 01:07
Start Date: 23/Jul/19 01:07
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #731: HIVE-19113
URL: https://github.com/apache/hive/pull/731#discussion_r306092544
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction_3.q.out
 ##
 @@ -582,7 +584,7 @@ STAGE PLANS:
 Reducer 4 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 5 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 6 <- Reducer 3 (SIMPLE_EDGE)
-Reducer 7 <- Reducer 3 (CUSTOM_SIMPLE_EDGE)
 
 Review comment:
   Interesting switch here - a suspicious 1:1 edge, because the hash function for a 
regular group-by != the hash function for bucketing; needs attention
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280737)
Time Spent: 0.5h  (was: 20m)

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, while the file layout 
> does not do any actual clustering of rows.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
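For context on the review comment above: a 1:1 edge is suspicious precisely because the function that assigns a row to a bucket need not agree with the shuffle hash used by a regular group-by. As a hedged illustration only (the helper name `bucketFor` is hypothetical; Hive's actual hashing lives in `ObjectInspectorUtils` and is not reproduced here), the conventional bucket-assignment arithmetic masks the sign bit of the hash code and takes the remainder:

```java
public class BucketSketch {
    // Hypothetical helper: clears the sign bit so negative hash codes
    // still yield a non-negative bucket index, then takes the modulus.
    static int bucketFor(Object key, int numBuckets) {
        return (key.hashCode() & Integer.MAX_VALUE) % numBuckets;
    }

    public static void main(String[] args) {
        // Integer.hashCode(n) == n, so integer keys land predictably.
        System.out.println(bucketFor(42, 4));  // 2
        System.out.println(bucketFor(-7, 4));  // 1 (sign bit masked first)
    }
}
```

Any shuffle that hashed the same keys differently (e.g. without the mask, or with a different seed) would scatter them across reducers in a different pattern, which is why a plan change from a shuffle edge to a 1:1 edge deserves the scrutiny the comment asks for.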


[jira] [Commented] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890594#comment-16890594
 ] 

Hive QA commented on HIVE-21991:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
16s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} llap-server in master has 83 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 3 new + 493 unchanged - 0 
fixed = 496 total (was 493) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m  
3s{color} | {color:red} root: The patch generated 3 new + 525 unchanged - 0 
fixed = 528 total (was 525) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18140/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18140/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18140/yetus/diff-checkstyle-root.txt
 |
| modules | C: ql llap-server . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18140/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-21991) Upgrade ORC version to 1.5.6

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890603#comment-16890603
 ] 

Hive QA commented on HIVE-21991:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975441/HIVE-21991.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16682 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge9] (batchId=29)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge9] 
(batchId=167)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18140/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18140/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18140/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975441 - PreCommit-HIVE-Build

> Upgrade ORC version to 1.5.6
> 
>
> Key: HIVE-21991
> URL: https://issues.apache.org/jira/browse/HIVE-21991
> Project: Hive
>  Issue Type: Task
>  Components: ORC
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21991.1.patch, HIVE-21991.2.patch, 
> HIVE-21991.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22028) Clean up Add Partition

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890626#comment-16890626
 ] 

Hive QA commented on HIVE-22028:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
7s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 6 new + 480 unchanged - 17 
fixed = 486 total (was 497) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hcatalog/streaming: The patch generated 0 new + 95 
unchanged - 6 fixed = 95 total (was 101) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch streaming passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} ql generated 0 new + 2248 unchanged - 2 fixed = 2248 
total (was 2250) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-18141/dev-support/hive-personality.sh
 |
| git revision | master / ac78f79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18141/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql hcatalog/streaming streaming U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-18141/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive

[jira] [Commented] (HIVE-22028) Clean up Add Partition

2019-07-22 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890644#comment-16890644
 ] 

Hive QA commented on HIVE-22028:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12975445/HIVE-22028.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16682 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/18141/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18141/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18141/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12975445 - PreCommit-HIVE-Build

> Clean up Add Partition
> --
>
> Key: HIVE-22028
> URL: https://issues.apache.org/jira/browse/HIVE-22028
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22028.01.patch, HIVE-22028.02.patch
>
>
> AlterTableAddPartitionDesc should be immutable, like the rest of the desc 
> classes. This cannot be done 100% right now, as it requires the refactoring 
> of the ImportSemanticAnalyzer, so the task will be finished only after that.
> Add Partition logic should be moved from Hive.java to 
> AlterTableAddPartitionOperation.java, only the metastore calls should remain 
> in Hive.java.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19113?focusedWorklogId=280788&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280788
 ]

ASF GitHub Bot logged work on HIVE-19113:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 23/Jul/19 03:43
Start Date: 23/Jul/19 03:43
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #731: HIVE-19113
URL: https://github.com/apache/hive/pull/731#discussion_r306117688
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction_3.q.out
 ##
 @@ -582,7 +584,7 @@ STAGE PLANS:
 Reducer 4 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 5 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 6 <- Reducer 3 (SIMPLE_EDGE)
-Reducer 7 <- Reducer 3 (CUSTOM_SIMPLE_EDGE)
 
 Review comment:
   This is the INSERT branch of a merge statement. The bucketing key (`clustered by 
(a)`) is the same one present in the right outer join, hence it seems fine.
   ```explain merge into acidTbl as t using nonAcidOrcTbl s ON t.a = s.a 
   WHEN MATCHED AND s.a > 8 THEN DELETE
   WHEN MATCHED THEN UPDATE SET b = 7
   WHEN NOT MATCHED THEN INSERT VALUES(s.a, s.b)```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280788)
Time Spent: 40m  (was: 0.5h)

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, while the file layout 
> does not do any actual clustering of rows.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19113?focusedWorklogId=280789&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280789
 ]

ASF GitHub Bot logged work on HIVE-19113:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 23/Jul/19 03:44
Start Date: 23/Jul/19 03:44
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #731: HIVE-19113
URL: https://github.com/apache/hive/pull/731#discussion_r306117688
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction_3.q.out
 ##
 @@ -582,7 +584,7 @@ STAGE PLANS:
 Reducer 4 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 5 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 6 <- Reducer 3 (SIMPLE_EDGE)
-Reducer 7 <- Reducer 3 (CUSTOM_SIMPLE_EDGE)
 
 Review comment:
   This is the INSERT branch of a merge statement. The bucketing key (`clustered by 
(a)`) is the same one present in the right outer join, hence it seems fine.
   ```
   explain merge into acidTbl as t using nonAcidOrcTbl s ON t.a = s.a 
   WHEN MATCHED AND s.a > 8 THEN DELETE
   WHEN MATCHED THEN UPDATE SET b = 7
   WHEN NOT MATCHED THEN INSERT VALUES(s.a, s.b)
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280789)
Time Spent: 50m  (was: 40m)

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, while the file layout 
> does not do any actual clustering of rows.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19113?focusedWorklogId=280790&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280790
 ]

ASF GitHub Bot logged work on HIVE-19113:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 23/Jul/19 03:46
Start Date: 23/Jul/19 03:46
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #731: HIVE-19113
URL: https://github.com/apache/hive/pull/731#discussion_r306118082
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedDynPartitionOptimizer.java
 ##
 @@ -276,8 +283,7 @@ public Object process(Node nd, Stack stack, 
NodeProcessorCtx procCtx,
 
   // Create ReduceSink operator
   ReduceSinkOperator rsOp = getReduceSinkOp(partitionPositions, 
sortPositions, sortOrder, sortNullOrder,
-  allRSCols, bucketColumns, 
numBuckets,
- fsParent, 
fsOp.getConf().getWriteType());
 
 Review comment:
   Yes, that's correct - it's only white-space (a merge into a single row).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280790)
Time Spent: 1h  (was: 50m)

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, while the file layout 
> does not do any actual clustering of rows.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19113?focusedWorklogId=280791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280791
 ]

ASF GitHub Bot logged work on HIVE-19113:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 23/Jul/19 03:49
Start Date: 23/Jul/19 03:49
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #731: HIVE-19113
URL: https://github.com/apache/hive/pull/731#discussion_r306118417
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/llap/dynamic_semijoin_reduction_3.q.out
 ##
 @@ -582,7 +584,7 @@ STAGE PLANS:
 Reducer 4 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 5 <- Reducer 3 (SIMPLE_EDGE)
 Reducer 6 <- Reducer 3 (SIMPLE_EDGE)
-Reducer 7 <- Reducer 3 (CUSTOM_SIMPLE_EDGE)
 
 Review comment:
   In fact, the other branches (DELETE and UPDATE) did not change, which seems 
fine too.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280791)
Time Spent: 1h 10m  (was: 1h)

> Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
> --
>
> Key: HIVE-19113
> URL: https://issues.apache.org/jira/browse/HIVE-19113
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19113.01.patch, HIVE-19113.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The user's expectation of 
> "create external table bucketed (key int) clustered by (key) into 4 buckets 
> stored as orc;"
> is that the table will cluster the key into 4 buckets, while the file layout 
> does not do any actual clustering of rows.
> In the absence of a "SORTED BY", this can automatically do a "SORTED BY 
> (key)" to cluster the keys within the file as expected.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22010) Clean up ShowCreateTableOperation

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22010?focusedWorklogId=280797&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280797
 ]

ASF GitHub Bot logged work on HIVE-22010:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 23/Jul/19 04:12
Start Date: 23/Jul/19 04:12
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #732: HIVE-22010 - 
Clean up ShowCreateTableOperation
URL: https://github.com/apache/hive/pull/732#discussion_r306120864
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/creation/ShowCreateTableOperation.java
 ##
 @@ -73,198 +82,207 @@ public ShowCreateTableOperation(DDLOperationContext 
context, ShowCreateTableDesc
   public int execute() throws HiveException {
 // get the create table statement for the table and populate the output
 try (DataOutputStream outStream = DDLUtils.getOutputStream(new 
Path(desc.getResFile()), context)) {
-  return showCreateTable(outStream);
+  Table table = context.getDb().getTable(desc.getTableName(), false);
+  String command = table.isView() ?
+  getCreateViewCommand(table) :
+  getCreateTableCommand(table);
+  outStream.write(command.getBytes(StandardCharsets.UTF_8));
+  return 0;
+} catch (IOException e) {
+  LOG.info("show create table: ", e);
+  return 1;
 } catch (Exception e) {
   throw new HiveException(e);
 }
   }
 
-  private int showCreateTable(DataOutputStream outStream) throws HiveException 
{
-boolean needsLocation = true;
-StringBuilder createTabCommand = new StringBuilder();
+  private static final String CREATE_VIEW_COMMAND = "CREATE VIEW `%s` AS %s";
 
-Table tbl = context.getDb().getTable(desc.getTableName(), false);
-List duplicateProps = new ArrayList();
-try {
-  needsLocation = CreateTableOperation.doesTableNeedLocation(tbl);
+  private String getCreateViewCommand(Table table) {
+return String.format(CREATE_VIEW_COMMAND, desc.getTableName(), 
table.getViewExpandedText());
+  }
 
-  if (tbl.isView()) {
-String createTabStmt = "CREATE VIEW `" + desc.getTableName() + "` AS " 
+ tbl.getViewExpandedText();
-outStream.write(createTabStmt.getBytes(StandardCharsets.UTF_8));
-return 0;
-  }
+  private static final String CREATE_TABLE_TEMPLATE =
+  "CREATE <" + TEMPORARY + "><" + EXTERNAL + ">TABLE `<" + NAME + ">`(\n" +
+  "<" + LIST_COLUMNS + ">)\n" +
+  "<" + COMMENT + ">\n" +
+  "<" + PARTITIONS + ">\n" +
+  "<" + BUCKETS + ">\n" +
+  "<" + SKEWED + ">\n" +
+  "<" + ROW_FORMAT + ">\n" +
+  "<" + LOCATION_BLOCK + ">" +
+  "TBLPROPERTIES (\n" +
+  "<" + PROPERTIES + ">)\n";
 
-  createTabCommand.append("CREATE <" + TEMPORARY + "><" + EXTERNAL + 
">TABLE `");
-  createTabCommand.append(desc.getTableName() + "`(\n");
-  createTabCommand.append("<" + LIST_COLUMNS + ">)\n");
-  createTabCommand.append("<" + TBL_COMMENT + ">\n");
 
 Review comment:
   It seems we have removed some of the information we were printing, e.g., 
`TBL_COMMENT` or `TBL_LOCATION`. Is this because they will be included in the 
table properties?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280797)
Time Spent: 20m  (was: 10m)

> Clean up ShowCreateTableOperation
> -
>
> Key: HIVE-22010
> URL: https://issues.apache.org/jira/browse/HIVE-22010
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22010.01.patch, HIVE-22010.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
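The refactor under review above replaces incremental `StringBuilder` appends with constant templates filled via `String.format`. A minimal, self-contained sketch of that pattern (the class and method names here are illustrative, not the patch's actual code; only the `CREATE_VIEW_COMMAND` constant mirrors the diff):

```java
public class CreateViewSketch {
    // Mirrors the CREATE_VIEW_COMMAND constant shown in the diff above.
    private static final String CREATE_VIEW_COMMAND = "CREATE VIEW `%s` AS %s";

    // Filling a constant template keeps the statement shape visible in one
    // place, instead of being scattered across many append() calls.
    static String createViewStatement(String viewName, String expandedText) {
        return String.format(CREATE_VIEW_COMMAND, viewName, expandedText);
    }

    public static void main(String[] args) {
        System.out.println(createViewStatement("v1", "SELECT `key` FROM `t1`"));
        // prints: CREATE VIEW `v1` AS SELECT `key` FROM `t1`
    }
}
```

The design trade-off the reviewer is probing still applies: a fixed template only emits the placeholders it contains, so any field dropped from the template (e.g. a comment or location block) silently disappears from the output unless it is re-introduced elsewhere.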


[jira] [Work logged] (HIVE-22010) Clean up ShowCreateTableOperation

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22010?focusedWorklogId=280798&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280798
 ]

ASF GitHub Bot logged work on HIVE-22010:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 23/Jul/19 04:12
Start Date: 23/Jul/19 04:12
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #732: HIVE-22010 - 
Clean up ShowCreateTableOperation
URL: https://github.com/apache/hive/pull/732#discussion_r306119196
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java
 ##
 @@ -186,6 +186,8 @@ static void internalBeforeClassSetup(Map 
additionalProperties, b
 });
 
 MetaStoreTestUtils.startMetaStoreWithRetry(hconf);
+// re set the WAREHOUSE property to the test dir, as the previous command 
added a random port to it
+hconf.set(MetastoreConf.ConfVars.WAREHOUSE.getVarname(), 
System.getProperty("test.warehouse.dir", "/tmp"));
 
 Review comment:
   I assume we expect the property to exist, hence the `/tmp` default would not make 
a difference?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280798)
Time Spent: 20m  (was: 10m)

> Clean up ShowCreateTableOperation
> -
>
> Key: HIVE-22010
> URL: https://issues.apache.org/jira/browse/HIVE-22010
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22010.01.patch, HIVE-22010.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22010) Clean up ShowCreateTableOperation

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22010?focusedWorklogId=280799&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280799
 ]

ASF GitHub Bot logged work on HIVE-22010:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 04:12
Start Date: 23/Jul/19 04:12
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #732: HIVE-22010 - 
Clean up ShowCreateTableOperation
URL: https://github.com/apache/hive/pull/732#discussion_r306120365
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/creation/ShowCreateTableOperation.java
 ##
 @@ -50,20 +53,26 @@
 import org.apache.hive.common.util.HiveStringUtils;
 import org.stringtemplate.v4.ST;
 
+import com.google.common.collect.ImmutableSet;
+
+import avro.shaded.com.google.common.collect.Sets;
 
 Review comment:
   Probably you did not want to use the shaded guava here.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280799)
Time Spent: 0.5h  (was: 20m)

> Clean up ShowCreateTableOperation
> -
>
> Key: HIVE-22010
> URL: https://issues.apache.org/jira/browse/HIVE-22010
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22010.01.patch, HIVE-22010.02.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22003:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks [~vgarg]

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-22003) Shared work optimizer may leave semijoin branches in plan that are not used

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22003?focusedWorklogId=280805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280805
 ]

ASF GitHub Bot logged work on HIVE-22003:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 04:42
Start Date: 23/Jul/19 04:42
Worklog Time Spent: 10m 
  Work Description: asfgit commented on pull request #729: HIVE-22003
URL: https://github.com/apache/hive/pull/729
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280805)
Time Spent: 1h 20m  (was: 1h 10m)

> Shared work optimizer may leave semijoin branches in plan that are not used
> ---
>
> Key: HIVE-22003
> URL: https://issues.apache.org/jira/browse/HIVE-22003
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22003.01.patch, HIVE-22003.01.patch, 
> HIVE-22003.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This may happen only when the TS are the only operators that are shared. 
> Repro attached in q file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=280834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280834
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 06:56
Start Date: 23/Jul/19 06:56
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306130094
 
 

 ##
 File path: 
itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java
 ##
 @@ -778,4 +778,27 @@ public MiniDruidLlapLocalCliConfig() {
 }
   }
 
+  /**
+   * The CliConfig implementation for Kudu.
+   */
+  public static class KuduCliConfig extends AbstractCliConfig {
+public KuduCliConfig() {
+  super(CoreKuduCliDriver.class);
+  try {
+setQueryDir("kudu-handler/src/test/queries/positive");
+
+setResultsDir("kudu-handler/src/test/results/positive");
+setLogDir("itests/qtest/target/qfile-results/kudu-handler/positive");
+
+setInitScript("q_test_init_src.sql");
+setCleanupScript("q_test_cleanup_src.sql");
+
+setHiveConfDir("");
+setClusterType(MiniClusterType.NONE);
 
 Review comment:
   Cluster type `NONE` seems to be `MR`.
   We should probably create a type `KUDU_LOCAL`, e.g., similar to 
`DRUID_LOCAL` (see `MiniDruidLlapLocalCliConfig` in `CliConfigs`). That would 
be useful for testing Kudu with LLAP; in addition, tests will run faster, etc.
   Can you try to change it? If you find any major issues, please create a 
follow-up JIRA for it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280834)
Time Spent: 1h  (was: 50m)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=280841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280841
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 06:56
Start Date: 23/Jul/19 06:56
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306145974
 
 

 ##
 File path: 
kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduPredicateHandler.java
 ##
 @@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.kudu;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.ql.exec.SerializationUtilities;
+import org.apache.hadoop.hive.ql.index.IndexPredicateAnalyzer;
+import org.apache.hadoop.hive.ql.index.IndexSearchCondition;
+import 
org.apache.hadoop.hive.ql.metadata.HiveStoragePredicateHandler.DecomposedPredicate;
+import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
+import org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc;
+import org.apache.hadoop.hive.ql.plan.TableScanDesc;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqual;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrLessThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPGreaterThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPLessThan;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNotNull;
+import org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPNull;
+import org.apache.kudu.ColumnSchema;
+import org.apache.kudu.Schema;
+import org.apache.kudu.client.KuduPredicate;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Contains static methods for decomposing predicate/filter expressions and
+ * getting the equivalent Kudu predicates.
+ */
+public final class KuduPredicateHandler {
+  static final Logger LOG = 
LoggerFactory.getLogger(KuduPredicateHandler.class);
+
+  private KuduPredicateHandler() {}
+
+  /**
+   * Analyzes the predicates and returns the portion of them which
+   * cannot be evaluated by Kudu during table access.
+   *
+   * @param predicateExpr predicate to be decomposed
+   * @param schema the schema of the Kudu table
+   * @return decomposed form of predicate, or null if no pushdown is possible 
at all
+   */
+  public static DecomposedPredicate decompose(ExprNodeDesc predicateExpr, 
Schema schema) {
+IndexPredicateAnalyzer analyzer = newAnalyzer(schema);
+List<IndexSearchCondition> sConditions = new ArrayList<>();
+ExprNodeDesc residualPredicate = analyzer.analyzePredicate(predicateExpr, sConditions);
+
+// Nothing to decompose.
+if (sConditions.size() == 0) {
+  return null;
+}
+
+DecomposedPredicate decomposedPredicate = new DecomposedPredicate();
+decomposedPredicate.pushedPredicate = 
analyzer.translateSearchConditions(sConditions);
+decomposedPredicate.residualPredicate = (ExprNodeGenericFuncDesc) 
residualPredicate;
+return decomposedPredicate;
+  }
+
+  /**
+   * Returns the list of Kudu predicates from the passed configuration.
+   *
+   * @param conf the execution configuration
+   * @param schema the schema of the Kudu table
+   * @return the list of Kudu predicates
+   */
+  public static List<KuduPredicate> getPredicates(Configuration conf, Schema schema) {
+List<KuduPredicate> predicates = new ArrayList<>();
+for (IndexSearchCondition sc : getSearchConditions(conf, schema)) {
+  predicates.add(conditionToPredicate(sc, schema));
+}
+return predicates;
+  }
+
+  private static List<IndexSearchCondition> getSearchConditions(Configuration conf, Schema schema) {
+List conditions = new ArrayList<>();
+ExprNodeDesc fi

[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=280846&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280846
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 06:56
Start Date: 23/Jul/19 06:56
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306153053
 
 

 ##
 File path: kudu-handler/src/test/queries/positive/kudu_queries.q
 ##
 @@ -0,0 +1,148 @@
+--! qt:dataset:src
+
+-- Create table specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS kv_table;
+CREATE EXTERNAL TABLE kv_table(key int, value string)
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_kv");
+
+DESCRIBE EXTENDED kv_table;
+
+-- Verify INSERT support.
+INSERT INTO TABLE kv_table VALUES
+(1, "1"), (2, "2");
+
+SELECT * FROM kv_table;
+SELECT count(*) FROM kv_table;
+SELECT count(*) FROM kv_table LIMIT 1;
+SELECT count(1) FROM kv_table;
+
+-- Verify projection and case insensitivity.
+SELECT kEy FROM kv_table;
+
+DROP TABLE kv_table;
+
+-- Create table without specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS all_types_table;
+CREATE EXTERNAL TABLE all_types_table
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_all_types");
+
+DESCRIBE EXTENDED all_types_table;
+
+INSERT INTO TABLE all_types_table VALUES
+(1, 1, 1, 1, true, 1.1, 1.1, "one", 'one', '2011-11-11 11:11:11', 1.111, null, 
1),
+(2, 2, 2, 2, false, 2.2, 2.2, "two", 'two', '2012-12-12 12:12:12', 2.222, 
null, 2);
+
+SELECT * FROM all_types_table;
+SELECT count(*) FROM all_types_table;
+
+-- Verify comparison predicates on byte.
+SELECT key FROM all_types_table WHERE key = 1;
 
 Review comment:
   In addition to `SELECT...` queries, add a few `EXPLAIN SELECT...` plans to q 
file so we can verify that computation is being pushed to Kudu as expected, etc.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280846)
Time Spent: 2h 10m  (was: 2h)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this functionality incrementally.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=280844&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280844
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 06:56
Start Date: 23/Jul/19 06:56
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306154418
 
 

 ##
 File path: kudu-handler/src/java/org/apache/hadoop/hive/kudu/KuduHiveUtils.java
 ##
 @@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.kudu;
+
+import java.security.AccessController;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+import javax.security.auth.Subject;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.utils.StringUtils;
+import org.apache.hadoop.hive.serde2.SerDeException;
+import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
+import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.token.Token;
+import org.apache.kudu.ColumnTypeAttributes;
+import org.apache.kudu.Type;
+import org.apache.kudu.client.KuduClient;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static 
org.apache.hadoop.hive.kudu.KuduStorageHandler.KUDU_MASTER_ADDRS_KEY;
+
+/**
+ * A collection of static utility methods for the Kudu Hive integration.
+ * This is useful for code sharing.
+ */
+public final class KuduHiveUtils {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(KuduHiveUtils.class);
+
+  private static final Text KUDU_TOKEN_KIND = new Text("kudu-authn-data");
+
+  private KuduHiveUtils() {}
+
+  /**
+   * Returns the union of the configuration and table properties with the
+   * table properties taking precedence.
+   */
+  public static Configuration createOverlayedConf(Configuration conf, 
Properties tblProps) {
+Configuration newConf = new Configuration(conf);
+for (Map.Entry<Object, Object> prop : tblProps.entrySet()) {
+  newConf.set((String) prop.getKey(), (String) prop.getValue());
+}
+return newConf;
+  }
+
+  public static String getMasterAddresses(Configuration conf) {
+// Load the default configuration.
+String masterAddresses = HiveConf.getVar(conf, 
HiveConf.ConfVars.HIVE_KUDU_MASTER_ADDRESSES_DEFAULT);
+if (StringUtils.isEmpty(masterAddresses)) {
+  throw new IllegalStateException("Kudu master addresses not specified in 
configuration");
 
 Review comment:
   We should add a negative q test for this.
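The quoted excerpt combines two patterns: `createOverlayedConf` merges table properties over a base configuration with the table properties taking precedence, and `getMasterAddresses` fails fast when a required key is absent. A dependency-free sketch of both (plain `Map` instead of Hadoop's `Configuration`; names are illustrative, not the handler's actual API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class OverlayConfDemo {

    // Union of base config and table properties; table properties win on conflict.
    static Map<String, String> overlay(Map<String, String> base, Properties tblProps) {
        Map<String, String> merged = new HashMap<>(base);
        for (Map.Entry<Object, Object> prop : tblProps.entrySet()) {
            merged.put((String) prop.getKey(), (String) prop.getValue());
        }
        return merged;
    }

    // Fail fast when the required key is missing or empty, mirroring the
    // IllegalStateException thrown by getMasterAddresses.
    static String requireMasterAddresses(Map<String, String> conf) {
        String addrs = conf.get("kudu.master_addresses");
        if (addrs == null || addrs.isEmpty()) {
            throw new IllegalStateException("Kudu master addresses not specified in configuration");
        }
        return addrs;
    }

    public static void main(String[] args) {
        Map<String, String> base = new HashMap<>();
        base.put("kudu.master_addresses", "host-a:7051");
        base.put("other.key", "base");

        Properties tblProps = new Properties();
        tblProps.setProperty("kudu.master_addresses", "host-b:7051");

        Map<String, String> merged = overlay(base, tblProps);
        System.out.println(requireMasterAddresses(merged)); // host-b:7051
        System.out.println(merged.get("other.key"));        // base
    }
}
```

The negative q test the reviewer asks for would exercise exactly the empty-key branch above: a table created with no master addresses configured should surface the `IllegalStateException` message.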
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 280844)

> Hive Support for Kudu
> -
>
> Key: HIVE-12971
> URL: https://issues.apache.org/jira/browse/HIVE-12971
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Lenni Kuff
>Assignee: Grant Henke
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-12971.0.patch, HIVE-12971.1.patch, 
> HIVE-12971.2.patch, HIVE-12971.3.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> JIRA for tracking work related to Hive/Kudu integration.
> It would be useful to allow Kudu data to be accessible via Hive. This would 
> involve creating a Kudu SerDe/StorageHandler and implementing support for 
> QUERY and DML commands like SELECT, INSERT, UPDATE, and DELETE. Kudu 
> Input/OutputFormats classes already exist. The work can be staged to support 
> this 

[jira] [Work logged] (HIVE-12971) Hive Support for Kudu

2019-07-22 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12971?focusedWorklogId=280845&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-280845
 ]

ASF GitHub Bot logged work on HIVE-12971:
-

Author: ASF GitHub Bot
Created on: 23/Jul/19 06:56
Start Date: 23/Jul/19 06:56
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #733: HIVE-12971: 
Add Support for Kudu Tables
URL: https://github.com/apache/hive/pull/733#discussion_r306152211
 
 

 ##
 File path: kudu-handler/src/test/queries/positive/kudu_queries.q
 ##
 @@ -0,0 +1,148 @@
+--! qt:dataset:src
+
+-- Create table specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS kv_table;
+CREATE EXTERNAL TABLE kv_table(key int, value string)
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_kv");
+
+DESCRIBE EXTENDED kv_table;
+
+-- Verify INSERT support.
+INSERT INTO TABLE kv_table VALUES
+(1, "1"), (2, "2");
+
+SELECT * FROM kv_table;
+SELECT count(*) FROM kv_table;
+SELECT count(*) FROM kv_table LIMIT 1;
+SELECT count(1) FROM kv_table;
+
+-- Verify projection and case insensitivity.
+SELECT kEy FROM kv_table;
+
+DROP TABLE kv_table;
+
+-- Create table without specifying columns.
+-- Note: Kudu is the source of truth for schema.
+DROP TABLE IF EXISTS all_types_table;
+CREATE EXTERNAL TABLE all_types_table
+STORED BY 'org.apache.hadoop.hive.kudu.KuduStorageHandler'
+TBLPROPERTIES ("kudu.table_name" = "default.kudu_all_types");
+
+DESCRIBE EXTENDED all_types_table;
+
+INSERT INTO TABLE all_types_table VALUES
+(1, 1, 1, 1, true, 1.1, 1.1, "one", 'one', '2011-11-11 11:11:11', 1.111, null, 
1),
+(2, 2, 2, 2, false, 2.2, 2.2, "two", 'two', '2012-12-12 12:12:12', 2.222, 
null, 2);
+
+SELECT * FROM all_types_table;
+SELECT count(*) FROM all_types_table;
+
+-- Verify comparison predicates on byte.
+SELECT key FROM all_types_table WHERE key = 1;
+SELECT key FROM all_types_table WHERE key != 1;
+SELECT key FROM all_types_table WHERE key > 1;
+SELECT key FROM all_types_table WHERE key >= 1;
+SELECT key FROM all_types_table WHERE key < 2;
+SELECT key FROM all_types_table WHERE key <= 2;
+
+-- Verify comparison predicates on short.
+SELECT key FROM all_types_table WHERE int16 = 1;
+SELECT key FROM all_types_table WHERE int16 != 1;
+SELECT key FROM all_types_table WHERE int16 > 1;
+SELECT key FROM all_types_table WHERE int16 >= 1;
+SELECT key FROM all_types_table WHERE int16 < 2;
+SELECT key FROM all_types_table WHERE int16 <= 2;
+
+-- Verify comparison predicates on int.
+SELECT key FROM all_types_table WHERE int32 = 1;
+SELECT key FROM all_types_table WHERE int32 != 1;
+SELECT key FROM all_types_table WHERE int32 > 1;
+SELECT key FROM all_types_table WHERE int32 >= 1;
+SELECT key FROM all_types_table WHERE int32 < 2;
+SELECT key FROM all_types_table WHERE int32 <= 2;
+
+-- Verify comparison predicates on long.
+SELECT key FROM all_types_table WHERE int64 = 1;
+SELECT key FROM all_types_table WHERE int64 != 1;
+SELECT key FROM all_types_table WHERE int64 > 1;
+SELECT key FROM all_types_table WHERE int64 >= 1;
+SELECT key FROM all_types_table WHERE int64 < 2;
+SELECT key FROM all_types_table WHERE int64 <= 2;
+
+-- Verify comparison predicates on boolean.
+SELECT key FROM all_types_table WHERE bool = true;
+SELECT key FROM all_types_table WHERE bool != true;
+SELECT key FROM all_types_table WHERE bool > true;
+SELECT key FROM all_types_table WHERE bool >= true;
+SELECT key FROM all_types_table WHERE bool < false;
+SELECT key FROM all_types_table WHERE bool <= false;
+
+-- Verify comparison predicates on string.
+-- Note: string is escaped because it's a reserved word.
+SELECT key FROM all_types_table WHERE `string` = "one";
+SELECT key FROM all_types_table WHERE `string` != "one";
+SELECT key FROM all_types_table WHERE `string` > "one";
+SELECT key FROM all_types_table WHERE `string` >= "one";
+SELECT key FROM all_types_table WHERE `string` < "two";
+SELECT key FROM all_types_table WHERE `string` <= "two";
+
+-- Verify comparison predicates on binary.
+-- Note: binary is escaped because it's a reserved word.
+SELECT key FROM all_types_table WHERE `binary` = cast ('one' as binary);
+SELECT key FROM all_types_table WHERE `binary` != cast ('one' as binary);
+SELECT key FROM all_types_table WHERE `binary` > cast ('one' as binary);
+SELECT key FROM all_types_table WHERE `binary` >= cast ('one' as binary);
+SELECT key FROM all_types_table WHERE `binary` < cast ('two' as binary);
+SELECT key FROM all_types_table WHERE `binary` <= cast ('two' as binary);
+
+-- Verify comparison predicates on timestamp.
+-- Note: timestamp is escaped because it's a reserved word.
+SELECT key FROM all_types_table WHERE `timestamp` = '2011-11-11 11:11:11';
+SELECT key FROM all_types_table WHERE `timestamp` != '2011-11-11 11:11:11';
+SELECT key FROM all_
