[jira] [Updated] (HIVE-20175) Missing ASF for some class with druid

2018-07-13 Thread Saijin Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang updated HIVE-20175:

Status: Patch Available  (was: Open)

> Missing ASF for some  class with druid
> --
>
> Key: HIVE-20175
> URL: https://issues.apache.org/jira/browse/HIVE-20175
> Project: Hive
>  Issue Type: Bug
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Trivial
> Attachments: HIVE-20175.1.patch
>
>
> When running the Druid unit tests, some classes are missing the ASF header.
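
For reference, the standard ASF license header that the affected classes need at 
the top of each source file is the usual Apache License 2.0 boilerplate (a 
sketch; the exact list of files is in the attached patch):

{code:java}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
{code}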



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20175) Missing ASF for some class with druid

2018-07-13 Thread Saijin Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang updated HIVE-20175:

Attachment: HIVE-20175.1.patch

> Missing ASF for some  class with druid
> --
>
> Key: HIVE-20175
> URL: https://issues.apache.org/jira/browse/HIVE-20175
> Project: Hive
>  Issue Type: Bug
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Trivial
> Attachments: HIVE-20175.1.patch
>
>
> When running the Druid unit tests, some classes are missing the ASF header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20175) Missing ASF for some class with druid

2018-07-13 Thread Saijin Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang updated HIVE-20175:

Description: When running the Druid unit tests, some classes are missing the 
ASF header.

> Missing ASF for some  class with druid
> --
>
> Key: HIVE-20175
> URL: https://issues.apache.org/jira/browse/HIVE-20175
> Project: Hive
>  Issue Type: Bug
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Trivial
>
> When running the Druid unit tests, some classes are missing the ASF header.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20175) Missing ASF for some class with druid

2018-07-13 Thread Saijin Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang reassigned HIVE-20175:
---


> Missing ASF for some  class with druid
> --
>
> Key: HIVE-20175
> URL: https://issues.apache.org/jira/browse/HIVE-20175
> Project: Hive
>  Issue Type: Improvement
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20175) Missing ASF for some class with druid

2018-07-13 Thread Saijin Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saijin Huang updated HIVE-20175:

Issue Type: Bug  (was: Improvement)

> Missing ASF for some  class with druid
> --
>
> Key: HIVE-20175
> URL: https://issues.apache.org/jira/browse/HIVE-20175
> Project: Hive
>  Issue Type: Bug
>Reporter: Saijin Huang
>Assignee: Saijin Huang
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20006) Make materializations invalidation cache work with multiple active remote metastores

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20006:
---
Description: 
The main points:
 - Only MVs that use transactional tables can have a time window value of 0. 
Those are the only MVs that can be guaranteed to not be outdated when a query 
is executed.
 - For MVs that +cannot be outdated+, comparison is based on valid write id 
lists.
 - For MVs that +can be outdated+:
 ** The window for valid outdated MVs can be specified in intervals of 1 minute.
 ** A materialized view is outdated if it was built before that time window and 
any source table has been modified since.

A time window of -1 means to always use the materialized view for rewriting 
without any checks concerning its validity. If a materialized view uses an 
external table, the only way to trigger the rewriting would be to set the 
property to -1, since currently we do not capture for validation purposes 
whether the external source tables have been modified since the MV was created 
or not.

  was:
The main points:
 - Only MVs that use transactional tables and are stored in transactional 
tables can have a time window value of 0. Those are the only MVs that can be 
guaranteed to not be outdated when a query is executed.
 - For MVs that +cannot be outdated+, comparison is based on valid write id 
lists.
 - For MVs that +can be outdated+:
 ** The window for valid outdated MVs can be specified in intervals of 1 minute.
 ** A materialized view is outdated if it was built before that time window and 
any source table has been modified since.

A time window of -1 means to always use the materialized view for rewriting 
without any checks concerning its validity. If a materialized view uses an 
external table, the only way to trigger the rewriting would be to set the 
property to -1, since currently we do not capture for validation purposes 
whether the external source tables have been modified since the MV was created 
or not.


> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-20006
> URL: https://issues.apache.org/jira/browse/HIVE-20006
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Fix For: 4.0.0
>
> Attachments: HIVE-19027.01.patch, HIVE-19027.02.patch, 
> HIVE-19027.03.patch, HIVE-19027.04.patch, HIVE-20006.01.patch, 
> HIVE-20006.02.patch, HIVE-20006.03.patch, HIVE-20006.04.patch, 
> HIVE-20006.05.patch, HIVE-20006.06.patch, HIVE-20006.07.patch, 
> HIVE-20006.patch
>
>
> The main points:
>  - Only MVs that use transactional tables can have a time window value of 0. 
> Those are the only MVs that can be guaranteed to not be outdated when a query 
> is executed.
>  - For MVs that +cannot be outdated+, comparison is based on valid write id 
> lists.
>  - For MVs that +can be outdated+:
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute.
>  ** A materialized view is outdated if it was built before that time window 
> and any source table has been modified since.
> A time window of -1 means to always use the materialized view for rewriting 
> without any checks concerning its validity. If a materialized view uses an 
> external table, the only way to trigger the rewriting would be to set the 
> property to -1, since currently we do not capture for validation purposes 
> whether the external source tables have been modified since the MV was 
> created or not.
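
As an illustration, a minimal sketch of how the time window described above is 
controlled from a session, assuming the standard rewriting property 
hive.materializedview.rewriting.time.window (values are illustrative):

{code:sql}
-- 0: only use an MV for rewriting if it cannot be outdated; validity is
-- checked against the valid write id lists of its transactional source tables.
SET hive.materializedview.rewriting.time.window=0min;

-- 10min: accept an MV that was built within the last 10 minutes, even if a
-- source table has been modified since.
SET hive.materializedview.rewriting.time.window=10min;

-- -1: always use the MV for rewriting without any validity checks (the only
-- option that triggers rewriting when the MV reads external tables).
SET hive.materializedview.rewriting.time.window=-1;
{code}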



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19027) Make materializations invalidation cache work with multiple active remote metastores

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19027:
---
Description: 
The main points:
 - Only MVs that use transactional tables can have a time window value of 0. 
Those are the only MVs that can be guaranteed to not be outdated when a query 
is executed.
 - For MVs that +cannot be outdated+, comparison is based on valid write id 
lists.
 - For MVs that +can be outdated+:
 ** The window for valid outdated MVs can be specified in intervals of 1 minute.
 ** A materialized view is outdated if it was built before that time window and 
any source table has been modified since.

A time window of -1 means to always use the materialized view for rewriting 
without any checks concerning its validity. If a materialized view uses an 
external table, the only way to trigger the rewriting would be to set the 
property to -1, since currently we do not capture for validation purposes 
whether the external source tables have been modified since the MV was created 
or not.

  was:
The main points:
 - Only MVs that use transactional tables and are stored in transactional 
tables can have a time window value of 0. Those are the only MVs that can be 
guaranteed to not be outdated when a query is executed.
 - For MVs that +cannot be outdated+, comparison is based on valid write id 
lists.
 - For MVs that +can be outdated+:
 ** The window for valid outdated MVs can be specified in intervals of 1 minute.
 ** A materialized view is outdated if it was built before that time window and 
any source table has been modified since.

A time window of -1 means to always use the materialized view for rewriting 
without any checks concerning its validity. If a materialized view uses an 
external table, the only way to trigger the rewriting would be to set the 
property to -1, since currently we do not capture for validation purposes 
whether the external source tables have been modified since the MV was created 
or not.


> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-19027
> URL: https://issues.apache.org/jira/browse/HIVE-19027
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Fix For: 3.1.0
>
> Attachments: HIVE-19027.01.patch, HIVE-19027.02.patch, 
> HIVE-19027.03.patch, HIVE-19027.04.patch
>
>
> The main points:
>  - Only MVs that use transactional tables can have a time window value of 0. 
> Those are the only MVs that can be guaranteed to not be outdated when a query 
> is executed.
>  - For MVs that +cannot be outdated+, comparison is based on valid write id 
> lists.
>  - For MVs that +can be outdated+:
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute.
>  ** A materialized view is outdated if it was built before that time window 
> and any source table has been modified since.
> A time window of -1 means to always use the materialized view for rewriting 
> without any checks concerning its validity. If a materialized view uses an 
> external table, the only way to trigger the rewriting would be to set the 
> property to -1, since currently we do not capture for validation purposes 
> whether the external source tables have been modified since the MV was 
> created or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19360) CBO: Add an "optimizedSQL" to QueryPlan object

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19360:
---
Attachment: HIVE-19360.7.patch

> CBO: Add an "optimizedSQL" to QueryPlan object 
> ---
>
> Key: HIVE-19360
> URL: https://issues.apache.org/jira/browse/HIVE-19360
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO, Diagnosability
>Affects Versions: 3.1.0
>Reporter: Gopal V
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-19360.1.patch, HIVE-19360.2.patch, 
> HIVE-19360.3.patch, HIVE-19360.4.patch, HIVE-19360.5.patch, 
> HIVE-19360.6.patch, HIVE-19360.7.patch
>
>
> Calcite RelNodes can be converted back into SQL (as the new JDBC storage 
> handler does), which allows Hive to print out the post CBO plan as a SQL 
> query instead of having to guess the join orders from the subsequent Tez plan.
> The query generated might not be always valid SQL at this point, but is a 
> world ahead of DAG plans in readability.
> E.g., the TPC-DS Query4 CTEs get expanded to:
> {code}
> SELECT t16.$f3 customer_preferred_cust_flag
> FROM
>   (SELECT t0.c_customer_id $f0,
>SUM((t2.ws_ext_list_price - 
> t2.ws_ext_wholesale_cost - t2.ws_ext_discount_amt + t2.ws_ext_sales_price) / 
> CAST(2 AS DECIMAL(10, 0))) $f8
>FROM
>  (SELECT c_customer_sk,
>  c_customer_id,
>  c_first_name,
>  c_last_name,
>  c_preferred_cust_flag,
>  c_birth_country,
>  c_login,
>  c_email_address
>   FROM default.customer
>   WHERE c_customer_sk IS NOT NULL
> AND c_customer_id IS NOT NULL) t0
>INNER JOIN (
>  (SELECT ws_sold_date_sk,
>  ws_bill_customer_sk,
>  ws_ext_discount_amt,
>  ws_ext_sales_price,
>  ws_ext_wholesale_cost,
>  ws_ext_list_price
>   FROM default.web_sales
>   WHERE ws_bill_customer_sk IS NOT NULL
> AND ws_sold_date_sk IS NOT NULL) t2
>INNER JOIN
>  (SELECT d_date_sk,
>  CAST(2002 AS INTEGER) d_year
>   FROM default.date_dim
>   WHERE d_year = 2002
> AND d_date_sk IS NOT NULL) t4 ON t2.ws_sold_date_sk = 
> t4.d_date_sk) ON t0.c_customer_sk = t2.ws_bill_customer_sk
>GROUP BY t0.c_customer_id,
> t0.c_first_name,
> t0.c_last_name,
> t0.c_preferred_cust_flag,
> t0.c_birth_country,
> t0.c_login,
> t0.c_email_address) t7
> INNER JOIN (
>   (SELECT t9.c_customer_id $f0,
>t9.c_preferred_cust_flag $f3,
> 
> SUM((t11.ss_ext_list_price - t11.ss_ext_wholesale_cost - 
> t11.ss_ext_discount_amt + t11.ss_ext_sales_price) / CAST(2 AS DECIMAL(10, 
> 0))) $f8
>FROM
>  (SELECT c_customer_sk,
>  c_customer_id,
>  c_first_name,
>  c_last_name,
>  c_preferred_cust_flag,
>  c_birth_country,
>  c_login,
>  c_email_address
>   FROM default.customer
>   WHERE c_customer_sk IS NOT NULL
> AND c_customer_id IS NOT NULL) t9
>INNER JOIN (
>  (SELECT ss_sold_date_sk,
>  ss_customer_sk,
>  ss_ext_discount_amt,
>  ss_ext_sales_price,
>  ss_ext_wholesale_cost,
>  ss_ext_list_price
>   FROM default.store_sales
>   WHERE ss_customer_sk IS NOT NULL
> AND ss_sold_date_sk IS NOT NULL) t11
>INNER JOIN
>  (SELECT d_date_sk,
>  CAST(2002 AS INTEGER) d_year
>   FROM default.date_dim
>   WHERE d_year = 2002
> AND d_date_sk IS NOT NULL) t13 ON 
> t11.ss_sold_date_sk = t13.d_date_sk) ON t9.c_customer_sk = t11.ss_customer_sk
>GROUP BY t9.c_customer_id,
> t9.c_first_name,
> t9.c_last_name,
> t9.c_preferred_cust_flag,
> t9.c_birth_count

[jira] [Commented] (HIVE-20163) Simplify StringSubstrColStart Initialization

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544078#comment-16544078
 ] 

Hive QA commented on HIVE-20163:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 2291 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} ql: The patch generated 0 new + 0 unchanged - 4 
fixed = 0 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12602/dev-support/hive-personality.sh
 |
| git revision | master / ab9e954 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12602/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Simplify StringSubstrColStart Initialization
> 
>
> Key: HIVE-20163
> URL: https://issues.apache.org/jira/browse/HIVE-20163
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20163.1.patch, HIVE-20163.2.patch
>
>
> * Remove code
> * Remove exception handling
> * Remove {{printStackTrace}} call



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20006) Make materializations invalidation cache work with multiple active remote metastores

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20006:
---
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

The patch had already been reviewed in HIVE-19027. Pushed to master.

> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-20006
> URL: https://issues.apache.org/jira/browse/HIVE-20006
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Fix For: 4.0.0
>
> Attachments: HIVE-19027.01.patch, HIVE-19027.02.patch, 
> HIVE-19027.03.patch, HIVE-19027.04.patch, HIVE-20006.01.patch, 
> HIVE-20006.02.patch, HIVE-20006.03.patch, HIVE-20006.04.patch, 
> HIVE-20006.05.patch, HIVE-20006.06.patch, HIVE-20006.07.patch, 
> HIVE-20006.patch
>
>
> The main points:
>  - Only MVs that use transactional tables and are stored in transactional 
> tables can have a time window value of 0. Those are the only MVs that can be 
> guaranteed to not be outdated when a query is executed.
>  - For MVs that +cannot be outdated+, comparison is based on valid write id 
> lists.
>  - For MVs that +can be outdated+:
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute.
>  ** A materialized view is outdated if it was built before that time window 
> and any source table has been modified since.
> A time window of -1 means to always use the materialized view for rewriting 
> without any checks concerning its validity. If a materialized view uses an 
> external table, the only way to trigger the rewriting would be to set the 
> property to -1, since currently we do not capture for validation purposes 
> whether the external source tables have been modified since the MV was 
> created or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19166) TestMiniLlapLocalCliDriver sysdb failure

2018-07-13 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544069#comment-16544069
 ] 

Daniel Dai commented on HIVE-19166:
---

There is a real issue in the information schema when there is no restriction 
(security off). The patch also includes the changes from Vineet's last patch 
(removing "select sequence_name from sequence_table order by sequence_name 
limit 5").

> TestMiniLlapLocalCliDriver sysdb failure
> 
>
> Key: HIVE-19166
> URL: https://issues.apache.org/jira/browse/HIVE-19166
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Vineet Garg
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19166.04.patch, HIVE-19166.05.patch, 
> HIVE-19166.06.patch, HIVE-19166.09.patch, HIVE-19166.1.patch, 
> HIVE-19166.10.patch, HIVE-19166.11.patch, HIVE-19166.12.patch, 
> HIVE-19166.2.patch, HIVE-19166.3.patch
>
>
> Broken by HIVE-18715



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19166) TestMiniLlapLocalCliDriver sysdb failure

2018-07-13 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19166:
--
Attachment: HIVE-19166.12.patch

> TestMiniLlapLocalCliDriver sysdb failure
> 
>
> Key: HIVE-19166
> URL: https://issues.apache.org/jira/browse/HIVE-19166
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Vineet Garg
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19166.04.patch, HIVE-19166.05.patch, 
> HIVE-19166.06.patch, HIVE-19166.09.patch, HIVE-19166.1.patch, 
> HIVE-19166.10.patch, HIVE-19166.11.patch, HIVE-19166.12.patch, 
> HIVE-19166.2.patch, HIVE-19166.3.patch
>
>
> Broken by HIVE-18715



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19166) TestMiniLlapLocalCliDriver sysdb failure

2018-07-13 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19166:
--
Attachment: (was: HIVE-19166.12.patch)

> TestMiniLlapLocalCliDriver sysdb failure
> 
>
> Key: HIVE-19166
> URL: https://issues.apache.org/jira/browse/HIVE-19166
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Vineet Garg
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19166.04.patch, HIVE-19166.05.patch, 
> HIVE-19166.06.patch, HIVE-19166.09.patch, HIVE-19166.1.patch, 
> HIVE-19166.10.patch, HIVE-19166.11.patch, HIVE-19166.12.patch, 
> HIVE-19166.2.patch, HIVE-19166.3.patch
>
>
> Broken by HIVE-18715



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20117) schema changes for txn stats

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544057#comment-16544057
 ] 

Hive QA commented on HIVE-20117:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931414/HIVE-20117.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12601/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12601/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12601/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-07-14 05:54:22.376
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-12601/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-07-14 05:54:22.379
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   537c9cb..ab9e954  master -> origin/master
+ git reset --hard HEAD
HEAD is now at 537c9cb HIVE-20135: Fix incompatible change in 
TimestampColumnVector to default (Jesus Camacho Rodriguez, reviewed by Owen 
O'Malley)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 3 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at ab9e954 HIVE-20090 : Extend creation of semijoin reduction 
filters to be able to discover new opportunities (Jesus Camacho Rodriguez via 
Deepak Jaiswal)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-07-14 05:54:24.167
+ rm -rf ../yetus_PreCommit-HIVE-Build-12601
+ mkdir ../yetus_PreCommit-HIVE-Build-12601
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-12601
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12601/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: standalone-metastore/src/main/sql/derby/hive-schema-3.1.0.derby.sql: 
does not exist in index
error: 
standalone-metastore/src/main/sql/derby/upgrade-3.0.0-to-3.1.0.derby.sql: does 
not exist in index
error: standalone-metastore/src/main/sql/mssql/hive-schema-3.1.0.mssql.sql: 
does not exist in index
error: 
standalone-metastore/src/main/sql/mssql/upgrade-3.0.0-to-3.1.0.mssql.sql: does 
not exist in index
error: standalone-metastore/src/main/sql/mysql/hive-schema-3.1.0.mysql.sql: 
does not exist in index
error: 
standalone-metastore/src/main/sql/mysql/upgrade-3.0.0-to-3.1.0.mysql.sql: does 
not exist in index
error: standalone-metastore/src/main/sql/oracle/hive-schema-3.1.0.oracle.sql: 
does not exist in index
error: 
standalone-metastore/src/main/sql/oracle/upgrade-3.0.0-to-3.1.0.oracle.sql: 
does not exist in index
error: 
standalone-metastore/src/main/sql/postgres/hive-schema-3.1.0.postgres.sql: does 
not exist in index
error: 
standalone-metastore/src/main/sql/postgres/upgrade-3.0.0-to-3.1.0.postgres.sql: 
does not exist in index
error: src/main/sql/derby/hive-schema-3.1.0.derby.sql: does not exist in index
error: src/main/sql/derby/upgrade-3.0.0-to-3.1.0.derby.sql: does not exist in 
index
error: src/main/sql/mssql/hive-schema-3.1.0.mssql.sql: does not exist in index
error: src/main/sql/mssql/upgrade-3.0.0-to-3.1.0.mssql.sql: does not exist in 
index
error: src/main/sql/mysql/hive-schema-3.1.0.mysql.sql: does not exist in index
error: src/main/sql/mysql/upgrade-3.0.0-to-3.1.0.mysql.sql: does not exist in 
index
error: src/main/sql/oracle/hive-schema-3.1.0.oracle.sql: does not exist in index
error: src/main/sql/oracle/upgrade-3.0.0-to-3.1.0.oracle.sql: does not exist in 
index
error: src/main/sql/postgres/hiv

[jira] [Commented] (HIVE-20006) Make materializations invalidation cache work with multiple active remote metastores

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544043#comment-16544043
 ] 

Hive QA commented on HIVE-20006:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931576/HIVE-20006.07.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14648 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12600/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12600/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12600/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931576 - PreCommit-HIVE-Build

> Make materializations invalidation cache work with multiple active remote 
> metastores
> 
>
> Key: HIVE-20006
> URL: https://issues.apache.org/jira/browse/HIVE-20006
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-19027.01.patch, HIVE-19027.02.patch, 
> HIVE-19027.03.patch, HIVE-19027.04.patch, HIVE-20006.01.patch, 
> HIVE-20006.02.patch, HIVE-20006.03.patch, HIVE-20006.04.patch, 
> HIVE-20006.05.patch, HIVE-20006.06.patch, HIVE-20006.07.patch, 
> HIVE-20006.patch
>
>
> The main points:
>  - Only MVs that use transactional tables and are stored in transactional 
> tables can have a time window value of 0. Those are the only MVs that can be 
> guaranteed to not be outdated when a query is executed.
>  - For MVs that +cannot be outdated+, comparison is based on valid write id 
> lists.
>  - For MVs that +can be outdated+:
>  ** The window for valid outdated MVs can be specified in intervals of 1 
> minute.
>  ** A materialized view is outdated if it was built before that time window 
> and any source table has been modified since.
> A time window of -1 means to always use the materialized view for rewriting 
> without any checks concerning its validity. If a materialized view uses an 
> external table, the only way to trigger the rewriting would be to set the 
> property to -1, since currently we do not capture for validation purposes 
> whether the external source tables have been modified since the MV was 
> created or not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19166) TestMiniLlapLocalCliDriver sysdb failure

2018-07-13 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19166:
--
Attachment: HIVE-19166.12.patch

> TestMiniLlapLocalCliDriver sysdb failure
> 
>
> Key: HIVE-19166
> URL: https://issues.apache.org/jira/browse/HIVE-19166
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Vineet Garg
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19166.04.patch, HIVE-19166.05.patch, 
> HIVE-19166.06.patch, HIVE-19166.09.patch, HIVE-19166.1.patch, 
> HIVE-19166.10.patch, HIVE-19166.11.patch, HIVE-19166.12.patch, 
> HIVE-19166.2.patch, HIVE-19166.3.patch
>
>
> Broken by HIVE-18715



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20006) Make materializations invalidation cache work with multiple active remote metastores

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544022#comment-16544022
 ] 

Hive QA commented on HIVE-20006:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
53s{color} | {color:blue} standalone-metastore/metastore-common in master has 
217 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
45s{color} | {color:blue} ql in master has 2289 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} common: The patch generated 1 new + 422 unchanged - 5 
fixed = 423 total (was 427) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 4 new + 380 unchanged - 4 
fixed = 384 total (was 384) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} root: The patch generated 1 new + 422 unchanged - 5 
fixed = 423 total (was 427) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
3s{color} | {color:red} standalone-metastore/metastore-common generated 3 new + 
215 unchanged - 2 fixed = 218 total (was 217) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
54s{color} | {color:red} ql generated 1 new + 2288 unchanged - 1 fixed = 2289 
total (was 2289) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} standalone-metastore_metastore-common generated 0 
new + 53 unchanged - 1 fixed = 53 total (was 54) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
6s{color} | {color:green} root generated 0 new + 370 unchanged - 1 fixed = 370 
total (was 371) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-common |
|  |  
org.apache.hadoop.hive.metastore.txn.TxnHandler.lockMaterializationRebuild(String,
 String, long) passes a nonconstant String to an execute or addBatch method on 
an SQL statement  At TxnHandler.java:nonconstant String to an execute or 
addBatch method on an SQL statement  At TxnHandler.java:[line 1785] |
|  |  
org.

[jira] [Updated] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities

2018-07-13 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-20090:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Jesus!

> Extend creation of semijoin reduction filters to be able to discover new 
> opportunities
> --
>
> Key: HIVE-20090
> URL: https://issues.apache.org/jira/browse/HIVE-20090
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20090.01.patch, HIVE-20090.02.patch, 
> HIVE-20090.04.patch, HIVE-20090.05.patch, HIVE-20090.06.patch, 
> HIVE-20090.07.patch, HIVE-20090.08.patch
>
>
> Assume the following plan:
> {noformat}
> TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9]
> TS[2] - RS[3] - JOIN[4] 
> TS[6] - RS[7] - JOIN[8]
> {noformat}
> Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, 
> i.e., input to join between both subplans.
> However, it may be useful to consider other possibilities too, e.g., reduced 
> by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important 
> when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would 
> create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not.
> This patch comprises two parts. First, it creates additional predicates when 
> possible. Secondly, it removes duplicate semijoin reduction 
> branches/predicates, e.g., if another semijoin that consumes the output of 
> the same expression already reduces a certain table scan operator (heuristic, 
> since this may not result in most efficient plan in all cases). Ultimately, 
> the decision on whether to use one or another should be cost-driven 
> (follow-up).
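
For context, these reduction branches are only generated when dynamic semijoin 
reduction is enabled on Tez; a minimal sketch of the session setting involved 
(property name assumed from current Hive, not part of this patch):

{code:sql}
-- Enable creation of semijoin reduction (bloom filter) branches on Tez; the
-- patch extends where such branches may attach and removes duplicate ones.
SET hive.tez.dynamic.semijoin.reduction=true;
{code}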



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities

2018-07-13 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544003#comment-16544003
 ] 

Ashutosh Chauhan commented on HIVE-20090:
-

+1

> Extend creation of semijoin reduction filters to be able to discover new 
> opportunities
> --
>
> Key: HIVE-20090
> URL: https://issues.apache.org/jira/browse/HIVE-20090
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20090.01.patch, HIVE-20090.02.patch, 
> HIVE-20090.04.patch, HIVE-20090.05.patch, HIVE-20090.06.patch, 
> HIVE-20090.07.patch, HIVE-20090.08.patch
>
>
> Assume the following plan:
> {noformat}
> TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9]
> TS[2] - RS[3] - JOIN[4] 
> TS[6] - RS[7] - JOIN[8]
> {noformat}
> Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, 
> i.e., input to join between both subplans.
> However, it may be useful to consider other possibilities too, e.g., reduced 
> by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important 
> when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would 
> create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not.
> This patch comprises two parts. First, it creates additional predicates when 
> possible. Secondly, it removes duplicate semijoin reduction 
> branches/predicates, e.g., if another semijoin that consumes the output of 
> the same expression already reduces a certain table scan operator (heuristic, 
> since this may not result in most efficient plan in all cases). Ultimately, 
> the decision on whether to use one or another should be cost-driven 
> (follow-up).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19886) Logs may be directed to 2 files if --hiveconf hive.log.file is used

2018-07-13 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19886:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Jaume!

> Logs may be directed to 2 files if --hiveconf hive.log.file is used
> ---
>
> Key: HIVE-19886
> URL: https://issues.apache.org/jira/browse/HIVE-19886
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Jaume M
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19886.2.patch, HIVE-19886.2.patch, 
> HIVE-19886.3.patch, HIVE-19886.4.patch, HIVE-19886.patch
>
>
> The hive launch script explicitly specifies the log4j2 configuration file to 
> use. The main() methods in HiveServer2 and HiveMetastore reconfigure the 
> logger based on user input via --hiveconf hive.log.file. This may cause logs 
> to end up in 2 different files: initial logs go to the file specified in 
> hive-log4j2.properties, and after logger reconfiguration the rest of the logs 
> go to the file specified via --hiveconf hive.log.file. 
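
To illustrate, a sketch of the kind of invocation that triggers the logger 
reconfiguration described above (the log file name is hypothetical):

{noformat}
# Early startup logs go to the file named in hive-log4j2.properties; after
# main() reconfigures logging, the remaining logs go to hs2-custom.log.
hive --service hiveserver2 --hiveconf hive.log.file=hs2-custom.log
{noformat}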



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20111) HBase-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20111:
---
   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> HBase-Hive (managed) table creation fails with strict managed table checks: 
> Table is marked as a managed table but is not transactional
> ---
>
> Key: HIVE-20111
> URL: https://issues.apache.org/jira/browse/HIVE-20111
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, StorageHandler
>Affects Versions: 3.0.0
>Reporter: Romil Choksi
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20111.01.patch, HIVE-20111.02.patch, 
> HIVE-20111.03.patch, HIVE-20111.04.patch
>
>
> Similar to HIVE-20085. HBase-Hive (managed) table creation fails with strict 
> managed table checks: Table is marked as a managed table but is not 
> transactional
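
For illustration, a minimal sketch of the kind of HBase-backed DDL that hits 
this check under strict managed table mode (table, column family, and mapping 
names are hypothetical):

{code:sql}
-- Declared without EXTERNAL, this is treated as a managed table; since the
-- HBase storage handler cannot make it transactional, the strict check fails.
CREATE TABLE hbase_table_1 (key INT, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
{code}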



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20111) HBase-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543995#comment-16543995
 ] 

Hive QA commented on HIVE-20111:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931577/HIVE-20111.04.patch

{color:green}SUCCESS:{color} +1 due to 26 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14650 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12599/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12599/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12599/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931577 - PreCommit-HIVE-Build

> HBase-Hive (managed) table creation fails with strict managed table checks: 
> Table is marked as a managed table but is not transactional
> ---
>
> Key: HIVE-20111
> URL: https://issues.apache.org/jira/browse/HIVE-20111
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, StorageHandler
>Affects Versions: 3.0.0
>Reporter: Romil Choksi
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-20111.01.patch, HIVE-20111.02.patch, 
> HIVE-20111.03.patch, HIVE-20111.04.patch
>
>
> Similar to HIVE-20085. HBase-Hive (managed) table creation fails with strict 
> managed table checks: Table is marked as a managed table but is not 
> transactional



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20135) Fix incompatible change in TimestampColumnVector to default to UTC

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543984#comment-16543984
 ] 

Jesus Camacho Rodriguez commented on HIVE-20135:


Pushed to master, branch-3, branch-3.1.

> Fix incompatible change in TimestampColumnVector to default to UTC
> --
>
> Key: HIVE-20135
> URL: https://issues.apache.org/jira/browse/HIVE-20135
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Jesus Camacho Rodriguez
>Priority: Blocker
> Fix For: 3.1.0, 4.0.0, storage-2.7.0
>
> Attachments: HIVE-20135.01.patch, HIVE-20135.02.patch, 
> HIVE-20135.03.patch, HIVE-20135.patch
>
>
> HIVE-20007 changed the default for TimestampColumnVector to use UTC, which 
> breaks API compatibility with storage-api 2.6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20135) Fix incompatible change in TimestampColumnVector to default to UTC

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20135:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix incompatible change in TimestampColumnVector to default to UTC
> --
>
> Key: HIVE-20135
> URL: https://issues.apache.org/jira/browse/HIVE-20135
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Jesus Camacho Rodriguez
>Priority: Blocker
> Fix For: 3.1.0, 4.0.0, storage-2.7.0
>
> Attachments: HIVE-20135.01.patch, HIVE-20135.02.patch, 
> HIVE-20135.03.patch, HIVE-20135.patch
>
>
> HIVE-20007 changed the default for TimestampColumnVector to use UTC, which 
> breaks API compatibility with storage-api 2.6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20111) HBase-Hive (managed) table creation fails with strict managed table checks: Table is marked as a managed table but is not transactional

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543979#comment-16543979
 ] 

Hive QA commented on HIVE-20111:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12599/dev-support/hive-personality.sh
 |
| git revision | master / 9c5c940 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12599/yetus/whitespace-eol.txt
 |
| modules | C: hbase-handler U: hbase-handler |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12599/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HBase-Hive (managed) table creation fails with strict managed table checks: 
> Table is marked as a managed table but is not transactional
> ---
>
> Key: HIVE-20111
> URL: https://issues.apache.org/jira/browse/HIVE-20111
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, StorageHandler
>Affects Versions: 3.0.0
>Reporter: Romil Choksi
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-20111.01.patch, HIVE-20111.02.patch, 
> HIVE-20111.03.patch, HIVE-20111.04.patch
>
>
> Similar to HIVE-20085. HBase-Hive (managed) table creation fails with strict 
> managed table checks: Table is marked as a managed table but is not 
> transactional



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20135) Fix incompatible change in TimestampColumnVector to default to UTC

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543975#comment-16543975
 ] 

Hive QA commented on HIVE-20135:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931572/HIVE-20135.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14650 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12598/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12598/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12598/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931572 - PreCommit-HIVE-Build

> Fix incompatible change in TimestampColumnVector to default to UTC
> --
>
> Key: HIVE-20135
> URL: https://issues.apache.org/jira/browse/HIVE-20135
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Jesus Camacho Rodriguez
>Priority: Blocker
> Fix For: 3.1.0, 4.0.0, storage-2.7.0
>
> Attachments: HIVE-20135.01.patch, HIVE-20135.02.patch, 
> HIVE-20135.03.patch, HIVE-20135.patch
>
>
> HIVE-20007 changed the default for TimestampColumnVector to use UTC, which 
> breaks API compatibility with storage-api 2.6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20135) Fix incompatible change in TimestampColumnVector to default to UTC

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543956#comment-16543956
 ] 

Hive QA commented on HIVE-20135:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} storage-api in master has 48 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12598/dev-support/hive-personality.sh
 |
| git revision | master / 9c5c940 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: storage-api U: storage-api |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12598/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix incompatible change in TimestampColumnVector to default to UTC
> --
>
> Key: HIVE-20135
> URL: https://issues.apache.org/jira/browse/HIVE-20135
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Jesus Camacho Rodriguez
>Priority: Blocker
> Fix For: 3.1.0, 4.0.0, storage-2.7.0
>
> Attachments: HIVE-20135.01.patch, HIVE-20135.02.patch, 
> HIVE-20135.03.patch, HIVE-20135.patch
>
>
> HIVE-20007 changed the default for TimestampColumnVector to use UTC, which 
> breaks API compatibility with storage-api 2.6.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543950#comment-16543950
 ] 

Jesus Camacho Rodriguez commented on HIVE-20090:


All tests passed. Cc [~ashutoshc] [~djaiswal]

> Extend creation of semijoin reduction filters to be able to discover new 
> opportunities
> --
>
> Key: HIVE-20090
> URL: https://issues.apache.org/jira/browse/HIVE-20090
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20090.01.patch, HIVE-20090.02.patch, 
> HIVE-20090.04.patch, HIVE-20090.05.patch, HIVE-20090.06.patch, 
> HIVE-20090.07.patch, HIVE-20090.08.patch
>
>
> Assume the following plan:
> {noformat}
> TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9]
> TS[2] - RS[3] - JOIN[4] 
> TS[6] - RS[7] - JOIN[8]
> {noformat}
> Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, 
> i.e., input to join between both subplans.
> However, it may be useful to consider other possibilities too, e.g., reduced 
> by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important 
> when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would 
> create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not.
> This patch comprises two parts. First, it creates additional predicates when 
> possible. Secondly, it removes duplicate semijoin reduction 
> branches/predicates, e.g., if another semijoin that consumes the output of 
> the same expression already reduces a certain table scan operator (heuristic, 
> since this may not result in most efficient plan in all cases). Ultimately, 
> the decision on whether to use one or another should be cost-driven 
> (follow-up).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543948#comment-16543948
 ] 

Hive QA commented on HIVE-20090:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931571/HIVE-20090.08.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14651 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12597/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12597/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12597/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931571 - PreCommit-HIVE-Build

> Extend creation of semijoin reduction filters to be able to discover new 
> opportunities
> --
>
> Key: HIVE-20090
> URL: https://issues.apache.org/jira/browse/HIVE-20090
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20090.01.patch, HIVE-20090.02.patch, 
> HIVE-20090.04.patch, HIVE-20090.05.patch, HIVE-20090.06.patch, 
> HIVE-20090.07.patch, HIVE-20090.08.patch
>
>
> Assume the following plan:
> {noformat}
> TS[0] - RS[1] - JOIN[4] - RS[5] - JOIN[8] - FS[9]
> TS[2] - RS[3] - JOIN[4] 
> TS[6] - RS[7] - JOIN[8]
> {noformat}
> Currently, {{TS\[6\]}} may only be reduced with the output of {{RS\[5\]}}, 
> i.e., input to join between both subplans.
> However, it may be useful to consider other possibilities too, e.g., reduced 
> by the output of {{RS\[1\]}} or {{RS\[3\]}}. For instance, this is important 
> when, given a large plan, an edge between {{RS[5]}} and {{TS[0]}} would 
> create a cycle, while an edge between {{RS[1]}} and {{TS[6]}} would not.
> This patch comprises two parts. First, it creates additional predicates when 
> possible. Secondly, it removes duplicate semijoin reduction 
> branches/predicates, e.g., if another semijoin that consumes the output of 
> the same expression already reduces a certain table scan operator (heuristic, 
> since this may not result in most efficient plan in all cases). Ultimately, 
> the decision on whether to use one or another should be cost-driven 
> (follow-up).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch

2018-07-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543946#comment-16543946
 ] 

Sergey Shelukhin commented on HIVE-19532:
-

This is the diff after the latest merge and the big analyze/etc. patch.

> fix tests for master-txnstats branch
> 
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, 
> HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, 
> HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.07.patch, 
> HIVE-19532.08.patch, HIVE-19532.09.patch, HIVE-19532.10.patch, 
> HIVE-19532.11.patch, HIVE-19532.12.patch, HIVE-19532.13.patch, 
> HIVE-19532.14.patch, HIVE-19532.15.patch, HIVE-19532.16.patch, 
> HIVE-19532.17.patch, HIVE-19532.18.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-20107) stats_part2.q fails

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-20107.
-
   Resolution: Fixed
Fix Version/s: (was: 4.0.0)
   txnstats

As part of the analyze/stats updater/strict checks patch

> stats_part2.q fails
> ---
>
> Key: HIVE-20107
> URL: https://issues.apache.org/jira/browse/HIVE-20107
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Steve Yeom
>Priority: Major
> Fix For: txnstats
>
>
> https://builds.apache.org/job/PreCommit-HIVE-Build/12425/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_stats_part2_/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19532) fix tests for master-txnstats branch

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19532:

Attachment: HIVE-19532.18.patch

> fix tests for master-txnstats branch
> 
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, 
> HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, 
> HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.07.patch, 
> HIVE-19532.08.patch, HIVE-19532.09.patch, HIVE-19532.10.patch, 
> HIVE-19532.11.patch, HIVE-19532.12.patch, HIVE-19532.13.patch, 
> HIVE-19532.14.patch, HIVE-19532.15.patch, HIVE-19532.16.patch, 
> HIVE-19532.17.patch, HIVE-19532.18.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

   Resolution: Fixed
Fix Version/s: txnstats
   Status: Resolved  (was: Patch Available)

Committed to branch. I've fixed all the failed tests from the last run that were 
not already failing on the branch.
If I've broken more tests in the process, they will be fixed together with the rest 
of the broken tests on the branch.

[~ekoifman] there's one problem that's kind of papered over (I made an 
exception in the code)...
Currently we generate a write ID when converting to MM tables. However, if I add 
the same path (generating the write ID in Driver acquireLocks via the normal 
process) when converting to full ACID tables, some tests fail because it 
generates a normal write ID instead of the magical write ID that is currently 
used (1001 or something like that) for conversion.
Why can't full ACID conversion use normal write IDs? If not, where is this 
magic write ID generated, and can we instead create it via normal means in 
acquireLocks?

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: txnstats
>
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-20081) remove EnvironmentContext usage and add proper request APIs

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-20081.
-
   Resolution: Fixed
Fix Version/s: txnstats

> remove EnvironmentContext usage and add proper request APIs
> ---
>
> Key: HIVE-20081
> URL: https://issues.apache.org/jira/browse/HIVE-20081
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: txnstats
>
>
> Optional, since because of old unrelated changes we cannot entirely get rid 
> of EnvironmentContext.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543929#comment-16543929
 ] 

Sergey Shelukhin commented on HIVE-19820:
-

Fixed the remaining issues with obscure commands and scenarios (alter partition 
update stats, adding extra props to an ACID table), and updated some out files.

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: (was: HIVE-19820.01.patch)

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: branch-19820.04.nogen.patch

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: (was: branch-19820.nogen.patch)

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: (was: HIVE-19820.04.patch)

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: HIVE-19820.05.patch

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: (was: HIVE-19820.patch)

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.02-master-txnstats.patch, HIVE-19820.03-master-txnstats.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.05.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.04.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20090) Extend creation of semijoin reduction filters to be able to discover new opportunities

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543917#comment-16543917
 ] 

Hive QA commented on HIVE-20090:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 2289 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 17 new + 35 unchanged - 9 
fixed = 52 total (was 44) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
2s{color} | {color:red} ql generated 2 new + 2289 unchanged - 0 fixed = 2291 
total (was 2289) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Should org.apache.hadoop.hive.ql.parse.TezCompiler$SemiJoinRemovalContext 
be a _static_ inner class?  At TezCompiler.java:inner class?  At 
TezCompiler.java:[lines 1224-1231] |
|  |  Should org.apache.hadoop.hive.ql.parse.TezCompiler$SemiJoinRemovalProc be 
a _static_ inner class?  At TezCompiler.java:inner class?  At 
TezCompiler.java:[lines 1022-1134] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12597/dev-support/hive-personality.sh
 |
| git revision | master / 9c5c940 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12597/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12597/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12597/yetus/new-findbugs-ql.html
 |
| modules | C: common ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12597/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Extend creation of semijoin reduction filters to be able to discover new 
> opportunities
> --
>
> Key: HIVE-20090
> URL: https://issues.apache.org/jira/browse/HIVE-20090
> Project: Hive
>  Issue Type: Improvement

[jira] [Updated] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-13 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-20164:
--
Attachment: HIVE-20164.2.patch

> Murmur Hash : Make sure CTAS and IAS use correct bucketing version
> --
>
> Key: HIVE-20164
> URL: https://issues.apache.org/jira/browse/HIVE-20164
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20164.1.patch, HIVE-20164.2.patch
>
>
> With the migration to Murmur hash, CTAS and IAS from the old table version to the 
> new table version do not work as intended, and data is hashed using the old hash 
> logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19408) Improve show materialized views statement to show more information about invalidation

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19408:
---
Attachment: HIVE-19408.patch

> Improve show materialized views statement to show more information about 
> invalidation
> -
>
> Key: HIVE-19408
> URL: https://issues.apache.org/jira/browse/HIVE-19408
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-19408.patch
>
>
> We should show more useful information in addition to the materialized view name. 
> For instance, information about whether the materialized view contents are 
> up-to-date or not, and which table(s) have changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19408) Improve show materialized views statement to show more information about invalidation

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19408:
---
Status: Patch Available  (was: In Progress)

> Improve show materialized views statement to show more information about 
> invalidation
> -
>
> Key: HIVE-19408
> URL: https://issues.apache.org/jira/browse/HIVE-19408
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
>
> We should show more useful information in addition to the materialized view name. 
> For instance, information about whether the materialized view contents are 
> up-to-date or not, and which table(s) have changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-19408) Improve show materialized views statement to show more information about invalidation

2018-07-13 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19408 started by Jesus Camacho Rodriguez.
--
> Improve show materialized views statement to show more information about 
> invalidation
> -
>
> Key: HIVE-19408
> URL: https://issues.apache.org/jira/browse/HIVE-19408
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
>
> We should show more useful information in addition to the materialized view name. 
> For instance, information about whether the materialized view contents are 
> up-to-date or not, and which table(s) have changed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19886) Logs may be directed to 2 files if --hiveconf hive.log.file is used

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543903#comment-16543903
 ] 

Hive QA commented on HIVE-19886:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931564/HIVE-19886.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14650 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12596/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12596/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12596/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931564 - PreCommit-HIVE-Build

> Logs may be directed to 2 files if --hiveconf hive.log.file is used
> ---
>
> Key: HIVE-19886
> URL: https://issues.apache.org/jira/browse/HIVE-19886
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Jaume M
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19886.2.patch, HIVE-19886.2.patch, 
> HIVE-19886.3.patch, HIVE-19886.4.patch, HIVE-19886.patch
>
>
> The hive launch script explicitly specifies the log4j2 configuration file to use. The 
> main() methods in HiveServer2 and HiveMetastore reconfigure the logger based 
> on user input via --hiveconf hive.log.file. This may cause logs to end up in 
> 2 different files: initial logs go to the file specified in 
> hive-log4j2.properties, and after the logger reconfiguration the rest of the logs 
> go to the file specified via --hiveconf hive.log.file. 
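A minimal sketch of the single-file idea, assuming the log4j2 configuration file reads the 
target log file name from a system property (the property name and helper class below are 
illustrative, not the actual Hive fix): resolve --hiveconf hive.log.file before the first 
logger is created, so the initial configuration already points at the user-supplied file 
and no second file is ever opened.

{code}
// Illustrative sketch only: parse --hiveconf hive.log.file from the command line and
// publish it as a system property *before* any LogManager/Logger call, so log4j2's
// first (and only) configuration already targets the requested file.
public final class EarlyLogFileSetup {
  private static final String KEY = "hive.log.file";   // assumed property name

  private EarlyLogFileSetup() {}

  public static void apply(String[] args) {
    for (int i = 0; i + 1 < args.length; i++) {
      if ("--hiveconf".equals(args[i]) && args[i + 1].startsWith(KEY + "=")) {
        System.setProperty(KEY, args[i + 1].substring(KEY.length() + 1));
      }
    }
  }

  public static void main(String[] args) {
    apply(args);   // must run before the first logger is created
    // ... only then initialize logging and start the server ...
  }
}
{code}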



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19886) Logs may be directed to 2 files if --hiveconf hive.log.file is used

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543882#comment-16543882
 ] 

Hive QA commented on HIVE-19886:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} service: The patch generated 0 new + 39 unchanged - 
1 fixed = 39 total (was 40) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12596/dev-support/hive-personality.sh
 |
| git revision | master / 9c5c940 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: service U: service |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12596/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Logs may be directed to 2 files if --hiveconf hive.log.file is used
> ---
>
> Key: HIVE-19886
> URL: https://issues.apache.org/jira/browse/HIVE-19886
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Jaume M
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19886.2.patch, HIVE-19886.2.patch, 
> HIVE-19886.3.patch, HIVE-19886.4.patch, HIVE-19886.patch
>
>
> The hive launch script explicitly specifies the log4j2 configuration file to use. The 
> main() methods in HiveServer2 and HiveMetastore reconfigure the logger based 
> on user input via --hiveconf hive.log.file. This may cause logs to end up in 
> 2 different files: initial logs go to the file specified in 
> hive-log4j2.properties, and after the logger reconfiguration the rest of the logs 
> go to the file specified via --hiveconf hive.log.file. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20152) reset db state, when repl dump fails, so rename table can be done

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543877#comment-16543877
 ] 

Hive QA commented on HIVE-20152:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931470/HIVE-20152.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14650 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12595/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12595/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12595/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931470 - PreCommit-HIVE-Build

> reset db state, when repl dump fails, so rename table can be done
> -
>
> Key: HIVE-20152
> URL: https://issues.apache.org/jira/browse/HIVE-20152
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HIVE-20152.1.patch
>
>
> If a repl dump command is run and it fails for some reason while doing table 
> level dumps, the state set on the db parameters is not reset and hence no 
> table / partition renames can be done. 
> The property to be reset is prefixed with the key {code}bootstrap.dump.state 
> {code}
> and it should be unset. Meanwhile, the workaround is 
> {code}
> describe database extended [db_name]; 
> {code}
> assuming property is 'bootstrap.dump.state.something'
> {code}
> alter  database [db_name] set dbproperties 
> ('bootstrap.dump.state.something'='idle');"
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20174) Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation Functions

2018-07-13 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20174:

Attachment: HIVE-20174.01.patch

> Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation 
> Functions
> 
>
> Key: HIVE-20174
> URL: https://issues.apache.org/jira/browse/HIVE-20174
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20174.01.patch
>
>
> Write new UT tests that use random data and intentional isRepeating batches 
> to check for NULL and Wrong Results for vectorized aggregation functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20174) Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation Functions

2018-07-13 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20174:

Status: Patch Available  (was: Open)

> Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation 
> Functions
> 
>
> Key: HIVE-20174
> URL: https://issues.apache.org/jira/browse/HIVE-20174
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20174.01.patch
>
>
> Write new UT tests that use random data and intentional isRepeating batches 
> to check for NULL and Wrong Results for vectorized aggregation functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20174) Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation Functions

2018-07-13 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20174:

Description: Write new UT tests that use random data and intentional 
isRepeating batches to check for NULL and Wrong Results for vectorized 
aggregation functions.  (was: Write new UT tests that use random data and 
intentional isRepeating batches to check for NULL and Wrong Results for 
vectorized aggregation functions:)

> Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation 
> Functions
> 
>
> Key: HIVE-20174
> URL: https://issues.apache.org/jira/browse/HIVE-20174
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>
> Write new UT tests that use random data and intentional isRepeating batches 
> to check for NULL and Wrong Results for vectorized aggregation functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20152) reset db state, when repl dump fails, so rename table can be done

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543859#comment-16543859
 ] 

Hive QA commented on HIVE-20152:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
53s{color} | {color:blue} ql in master has 2289 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 5 new + 14 unchanged - 0 fixed 
= 19 total (was 14) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} ql generated 0 new + 2288 unchanged - 1 fixed = 2288 
total (was 2289) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12595/dev-support/hive-personality.sh
 |
| git revision | master / 9c5c940 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12595/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12595/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12595/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> reset db state, when repl dump fails, so rename table can be done
> -
>
> Key: HIVE-20152
> URL: https://issues.apache.org/jira/browse/HIVE-20152
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HIVE-20152.1.patch
>
>
> If a repl dump command is run and it fails for some reason while doing table 
> level dumps, the state set on the db parameters is not reset and hence no 
> table / partition renames can be done. 
> The property to be reset is prefixed with the key {code}bootstrap.dump.state 
> {code}
> and it should be unset. Meanwhile, the workaround is 
> {code}
> describe database extended [db_name]; 
> {code}
> assuming property is 'bootstrap.dump.state.something'
> {code}
> alter  database [db_name] set dbproperties 
> ('bootstrap.dump.state.something'='idle');"
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20174) Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation Functions

2018-07-13 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-20174:
---


> Vectorization: Fix NULL / Wrong Results issues in GROUP BY Aggregation 
> Functions
> 
>
> Key: HIVE-20174
> URL: https://issues.apache.org/jira/browse/HIVE-20174
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>
> Write new UT tests that use random data and intentional isRepeating batches 
> to check for NULL and Wrong Results for vectorized aggregation functions:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19809) Remove Deprecated Code From Utilities Class

2018-07-13 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543846#comment-16543846
 ] 

Aihua Xu commented on HIVE-19809:
-

[~belugabehr] Actually, can you reattach the patch to trigger the precommit 
build? Right now we try to get a clean build with no test failures before 
committing. Of course, I don't think it's related to your change.

> Remove Deprecated Code From Utilities Class
> ---
>
> Key: HIVE-19809
> URL: https://issues.apache.org/jira/browse/HIVE-19809
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-19809.1.patch
>
>
> {quote}
> This can go away once hive moves to support only JDK 7  and can use 
> Files.createTempDirectory
> {quote}
> Remove the {{createTempDir}} method from the {{Utilities}} class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20165) Enable ZLIB for streaming ingest

2018-07-13 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20165:
-
Attachment: HIVE-20165.2.patch

> Enable ZLIB for streaming ingest
> 
>
> Key: HIVE-20165
> URL: https://issues.apache.org/jira/browse/HIVE-20165
> Project: Hive
>  Issue Type: Bug
>  Components: Streaming, Transactions
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20165.1.patch, HIVE-20165.2.patch
>
>
> Per [~gopalv]'s recommendation, I tried running streaming ingest with and 
> without zlib. Following are the numbers:
>  
>  *Compression: NONE*
>  Total rows committed: 9380
>  Throughput: *156* rows/second
> $ hdfs dfs -du -s -h /apps/hive/warehouse/prasanth.db/culvert
>  *14.1 G*  /apps/hive/warehouse/prasanth.db/culvert
>   
>  *Compression: ZLIB*
>  Total rows committed: 9210
>  Throughput: *1535000* rows/second
> $ hdfs dfs -du -s -h /apps/hive/warehouse/prasanth.db/culvert
>  *7.4 G*  /apps/hive/warehouse/prasanth.db/culvert
>   
>  ZLIB is getting us 2x compression and only 2% lower throughput. We should 
> enable ZLIB by default for streaming ingest. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20165) Enable ZLIB for streaming ingest

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543824#comment-16543824
 ] 

Hive QA commented on HIVE-20165:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931441/HIVE-20165.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.streaming.TestStreaming.testFileDumpDeltaFilesWithStreamingOptimizations
 (batchId=319)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12593/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12593/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12593/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931441 - PreCommit-HIVE-Build

> Enable ZLIB for streaming ingest
> 
>
> Key: HIVE-20165
> URL: https://issues.apache.org/jira/browse/HIVE-20165
> Project: Hive
>  Issue Type: Bug
>  Components: Streaming, Transactions
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20165.1.patch
>
>
> Per [~gopalv]'s recommendation, I tried running streaming ingest with and 
> without zlib. Following are the numbers:
>  
>  *Compression: NONE*
>  Total rows committed: 9380
>  Throughput: *156* rows/second
> $ hdfs dfs -du -s -h /apps/hive/warehouse/prasanth.db/culvert
>  *14.1 G*  /apps/hive/warehouse/prasanth.db/culvert
>   
>  *Compression: ZLIB*
>  Total rows committed: 9210
>  Throughput: *1535000* rows/second
> $ hdfs dfs -du -s -h /apps/hive/warehouse/prasanth.db/culvert
>  *7.4 G*  /apps/hive/warehouse/prasanth.db/culvert
>   
>  ZLIB is getting us 2x compression and only 2% lower throughput. We should 
> enable ZLIB by default for streaming ingest. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543814#comment-16543814
 ] 

Sergey Shelukhin commented on HIVE-19820:
-

Fixed the issue where merge was not working correctly due to the isCompliant flag, 
and also changed merge to be stricter for txn stats (in particular, always 
set stats to invalid if stats cannot be merged).

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.01.patch, HIVE-19820.02-master-txnstats.patch, 
> HIVE-19820.03-master-txnstats.patch, HIVE-19820.04-master-txnstats.patch, 
> HIVE-19820.04.patch, HIVE-19820.patch, branch-19820.02.nogen.patch, 
> branch-19820.03.nogen.patch, branch-19820.nogen.patch, 
> branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: (was: HIVE-19820.03.patch)

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.01.patch, HIVE-19820.02-master-txnstats.patch, 
> HIVE-19820.03-master-txnstats.patch, HIVE-19820.04-master-txnstats.patch, 
> HIVE-19820.04.patch, HIVE-19820.patch, branch-19820.02.nogen.patch, 
> branch-19820.03.nogen.patch, branch-19820.nogen.patch, 
> branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19820:

Attachment: HIVE-19820.04.patch

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.01.patch, HIVE-19820.02-master-txnstats.patch, 
> HIVE-19820.03-master-txnstats.patch, HIVE-19820.04-master-txnstats.patch, 
> HIVE-19820.04.patch, HIVE-19820.patch, branch-19820.02.nogen.patch, 
> branch-19820.03.nogen.patch, branch-19820.nogen.patch, 
> branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19668) Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings

2018-07-13 Thread Misha Dmitriev (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HIVE-19668:
--
Status: Patch Available  (was: In Progress)

The previous patch may or may not have been applied, so I've just updated my 
local git repo clone and generated a new patch file.

> Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and 
> duplicate strings
> --
>
> Key: HIVE-19668
> URL: https://issues.apache.org/jira/browse/HIVE-19668
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HIVE-19668.01.patch, HIVE-19668.02.patch, 
> HIVE-19668.03.patch, HIVE-19668.04.patch, image-2018-05-22-17-41-39-572.png
>
>
> I've recently analyzed a HS2 heap dump, obtained when there was a huge memory 
> spike during compilation of some big query. The analysis was done with jxray 
> ([www.jxray.com|http://www.jxray.com]). It turns out that more than 90% of 
> the 20G heap was used by data structures associated with query parsing 
> ({{org.apache.hadoop.hive.ql.parse.QBExpr}}). There are probably multiple 
> opportunities for optimizations here. One of them is to stop the code from 
> creating duplicate instances of the {{org.antlr.runtime.CommonToken}} class. See 
> a sample of these objects in the attached image:
> !image-2018-05-22-17-41-39-572.png|width=879,height=399!
> It looks like these particular {{CommonToken}} objects are constants that don't 
> change once created. I see some code, e.g. in 
> {{org.apache.hadoop.hive.ql.parse.CalcitePlanner}}, where such objects are 
> apparently repeatedly created with e.g. {{new 
> CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT")}}. If these 33 token kinds 
> are instead created once and reused, we will save more than 1/10th of the 
> heap in this scenario. Plus, since these objects are small but very numerous, 
> getting rid of them will remove a great deal of pressure from the GC.
> Another source of waste is duplicate strings, which collectively waste 26.1% 
> of memory. Some of them come from CommonToken objects that have the same text 
> (i.e. for multiple CommonToken objects the contents of their 'text' Strings 
> are the same, but each has its own copy of that String). Other duplicate 
> strings come from other sources that are easy enough to fix by adding 
> String.intern() calls.
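
To make the two proposed fixes concrete, here is a small sketch (not the actual patch); it assumes hive-exec and the ANTLR 3 runtime are on the classpath, and that the shared tokens are never mutated by callers.

{code}
import org.antlr.runtime.CommonToken;
import org.apache.hadoop.hive.ql.parse.HiveParser;

final class TokenReuseSketch {
  // 1) Create each constant token once and reuse it, instead of calling
  //    new CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT") at every use site.
  static final CommonToken TOK_INSERT_TOKEN =
      new CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT");

  // 2) Deduplicate repeated 'text' payloads with String.intern(), so that
  //    tokens carrying the same text share one String instance.
  static String dedupText(String text) {
    return text == null ? null : text.intern();
  }
}
{code}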



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19668) Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings

2018-07-13 Thread Misha Dmitriev (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HIVE-19668:
--
Attachment: HIVE-19668.04.patch

> Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and 
> duplicate strings
> --
>
> Key: HIVE-19668
> URL: https://issues.apache.org/jira/browse/HIVE-19668
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HIVE-19668.01.patch, HIVE-19668.02.patch, 
> HIVE-19668.03.patch, HIVE-19668.04.patch, image-2018-05-22-17-41-39-572.png
>
>
> I've recently analyzed a HS2 heap dump, obtained when there was a huge memory 
> spike during compilation of some big query. The analysis was done with jxray 
> ([www.jxray.com|http://www.jxray.com]). It turns out that more than 90% of 
> the 20G heap was used by data structures associated with query parsing 
> ({{org.apache.hadoop.hive.ql.parse.QBExpr}}). There are probably multiple 
> opportunities for optimizations here. One of them is to stop the code from 
> creating duplicate instances of the {{org.antlr.runtime.CommonToken}} class. See 
> a sample of these objects in the attached image:
> !image-2018-05-22-17-41-39-572.png|width=879,height=399!
> It looks like these particular {{CommonToken}} objects are constants that don't 
> change once created. I see some code, e.g. in 
> {{org.apache.hadoop.hive.ql.parse.CalcitePlanner}}, where such objects are 
> apparently repeatedly created with e.g. {{new 
> CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT")}}. If these 33 token kinds 
> are instead created once and reused, we will save more than 1/10th of the 
> heap in this scenario. Plus, since these objects are small but very numerous, 
> getting rid of them will remove a great deal of pressure from the GC.
> Another source of waste is duplicate strings, which collectively waste 26.1% 
> of memory. Some of them come from CommonToken objects that have the same text 
> (i.e. for multiple CommonToken objects the contents of their 'text' Strings 
> are the same, but each has its own copy of that String). Other duplicate 
> strings come from other sources that are easy enough to fix by adding 
> String.intern() calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19668) Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings

2018-07-13 Thread Misha Dmitriev (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated HIVE-19668:
--
Status: In Progress  (was: Patch Available)

> Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and 
> duplicate strings
> --
>
> Key: HIVE-19668
> URL: https://issues.apache.org/jira/browse/HIVE-19668
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HIVE-19668.01.patch, HIVE-19668.02.patch, 
> HIVE-19668.03.patch, HIVE-19668.04.patch, image-2018-05-22-17-41-39-572.png
>
>
> I've recently analyzed a HS2 heap dump, obtained when there was a huge memory 
> spike during compilation of some big query. The analysis was done with jxray 
> ([www.jxray.com|http://www.jxray.com]). It turns out that more than 90% of 
> the 20G heap was used by data structures associated with query parsing 
> ({{org.apache.hadoop.hive.ql.parse.QBExpr}}). There are probably multiple 
> opportunities for optimizations here. One of them is to stop the code from 
> creating duplicate instances of the {{org.antlr.runtime.CommonToken}} class. See 
> a sample of these objects in the attached image:
> !image-2018-05-22-17-41-39-572.png|width=879,height=399!
> It looks like these particular {{CommonToken}} objects are constants that don't 
> change once created. I see some code, e.g. in 
> {{org.apache.hadoop.hive.ql.parse.CalcitePlanner}}, where such objects are 
> apparently repeatedly created with e.g. {{new 
> CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT")}}. If these 33 token kinds 
> are instead created once and reused, we will save more than 1/10th of the 
> heap in this scenario. Plus, since these objects are small but very numerous, 
> getting rid of them will remove a great deal of pressure from the GC.
> Another source of waste is duplicate strings, which collectively waste 26.1% 
> of memory. Some of them come from CommonToken objects that have the same text 
> (i.e. for multiple CommonToken objects the contents of their 'text' Strings 
> are the same, but each has its own copy of that String). Other duplicate 
> strings come from other sources that are easy enough to fix by adding 
> String.intern() calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20165) Enable ZLIB for streaming ingest

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543746#comment-16543746
 ] 

Hive QA commented on HIVE-20165:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 2289 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12593/dev-support/hive-personality.sh
 |
| git revision | master / 9c5c940 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql streaming U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12593/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Enable ZLIB for streaming ingest
> 
>
> Key: HIVE-20165
> URL: https://issues.apache.org/jira/browse/HIVE-20165
> Project: Hive
>  Issue Type: Bug
>  Components: Streaming, Transactions
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20165.1.patch
>
>
> Per [~gopalv]'s recommendation, I tried running streaming ingest with and 
> without zlib. Following are the numbers:
>  
>  *Compression: NONE*
>  Total rows committed: 9380
>  Throughput: *156* rows/second
> $ hdfs dfs -du -s -h /apps/hive/warehouse/prasanth.db/culvert
>  *14.1 G*  /apps/hive/warehouse/prasanth.db/culvert
>   
>  *Compression: ZLIB*
>  Total rows committed: 9210
>  Throughput: *1535000* rows/second
> $ hdfs dfs -du -s -h /apps/hive/warehouse/prasanth.db/culvert
>  *7.4 G*  /apps/hive/warehouse/prasanth.db/culvert
>   
>  ZLIB is getting us 2x compression at only 2% lower throughput. We should 
> enable ZLIB by default for streaming ingest. 
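
For anyone reproducing the comparison above, a minimal sketch of how the codec can be pinned on the ingest target table. The JDBC URL, credentials and table name are placeholders taken from the benchmark output; 'orc.compress' is the standard ORC table property (NONE / ZLIB / SNAPPY), assuming the target table is ORC-backed.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EnableZlibForIngestTarget {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details; requires the Hive JDBC driver on the classpath.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/prasanth", "user", "");
         Statement st = conn.createStatement()) {
      // ZLIB gave ~2x smaller files for ~2% lower ingest throughput above.
      st.execute("ALTER TABLE culvert SET TBLPROPERTIES ('orc.compress'='ZLIB')");
    }
  }
}
{code}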



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20172) StatsUpdater failed with GSS Exception while trying to connect to remote metastore

2018-07-13 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543730#comment-16543730
 ] 

ASF GitHub Bot commented on HIVE-20172:
---

GitHub user rajkrrsingh opened a pull request:

https://github.com/apache/hive/pull/400

HIVE-20172: StatsUpdater failed with GSS Exception while trying to co…

Since the metastore client is running inside HMS, there is no need to connect to 
a remote URI; as part of this PR I will update the metastore URI so that 
it connects in embedded mode.
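
As a rough sketch of the embedded-mode idea (the actual PR may differ): with an empty hive.metastore.uris the metastore client is created in-process, so the StatsUpdater session never opens a remote, Kerberized Thrift connection.

{code}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.session.SessionState;

public class EmbeddedMetastoreSessionSketch {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Empty URIs -> HiveMetaStoreClient runs embedded in this JVM,
    // so no SASL/GSS handshake to a remote HMS is attempted.
    conf.setVar(HiveConf.ConfVars.METASTOREURIS, "");
    SessionState.start(new SessionState(conf));
  }
}
{code}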



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajkrrsingh/hive HIVE-20172

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/400.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #400


commit 3efc2d9ba96822101b30c645d746849e772e478c
Author: Rajkumar singh 
Date:   2018-07-13T21:17:40Z

HIVE-20172: StatsUpdater failed with GSS Exception while trying to connect 
to remote metastore




> StatsUpdater failed with GSS Exception while trying to connect to remote 
> metastore
> --
>
> Key: HIVE-20172
> URL: https://issues.apache.org/jira/browse/HIVE-20172
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.1
> Environment: Hive-1.2.1,Hive2.1,java8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20172.patch
>
>
> StatsUpdater task failed with GSS Exception while trying to connect to remote 
> Metastore.
> {code}
> org.apache.thrift.transport.TTransportException: GSS initiate failed 
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3526) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3558) 
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Worker$StatsUpdater.gatherStats(Worker.java:300)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:265) 
> at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:177) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174) 
> ) 
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hiv

[jira] [Updated] (HIVE-20172) StatsUpdater failed with GSS Exception while trying to connect to remote metastore

2018-07-13 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-20172:
--
Labels: pull-request-available  (was: )

> StatsUpdater failed with GSS Exception while trying to connect to remote 
> metastore
> --
>
> Key: HIVE-20172
> URL: https://issues.apache.org/jira/browse/HIVE-20172
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.1
> Environment: Hive-1.2.1,Hive2.1,java8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20172.patch
>
>
> StatsUpdater task failed with GSS Exception while trying to connect to remote 
> Metastore.
> {code}
> org.apache.thrift.transport.TTransportException: GSS initiate failed 
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3526) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3558) 
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Worker$StatsUpdater.gatherStats(Worker.java:300)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:265) 
> at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:177) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174) 
> ) 
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> {code}
> Since the metastore client is running inside HMS, there is no need to connect to 
> a remote URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543704#comment-16543704
 ] 

Hive QA commented on HIVE-19820:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931431/HIVE-19820.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12592/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12592/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12592/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12931431/HIVE-19820.03.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931431 - PreCommit-HIVE-Build

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.01.patch, HIVE-19820.02-master-txnstats.patch, 
> HIVE-19820.03-master-txnstats.patch, HIVE-19820.03.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and 
> also gets ACID state, and discards it without using it.
> When ACID stats are implemented, ACID state needs to be used to do 
> version-aware valid stats checks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543701#comment-16543701
 ] 

Hive QA commented on HIVE-17593:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931423/HIVE-17593.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorStringExpressions.testStringLength
 (batchId=300)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12591/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12591/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12591/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931423 - PreCommit-HIVE-Build

> DataWritableWriter strip spaces for CHAR type before writing, but predicate 
> generator doesn't do same thing.
> 
>
> Key: HIVE-17593
> URL: https://issues.apache.org/jira/browse/HIVE-17593
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.0, 3.0.0
>Reporter: Junjie Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, 
> HIVE-17593.4.patch, HIVE-17593.patch
>
>
> DataWritableWriter strips spaces for the CHAR type before writing, but when 
> generating the predicate it does NOT do the same stripping, which would cause 
> missing data!
> In the current version it doesn't cause missing data, since the predicate is not 
> pushed down to Parquet due to HIVE-17261.
> Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING the 
> same, which will build a predicate with trailing spaces.
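
To illustrate the mismatch described above, a small self-contained sketch (not code from the patch):

{code}
public class CharPredicateMismatch {
  public static void main(String[] args) {
    // Writer side: DataWritableWriter strips trailing spaces from CHAR values.
    String stored = "abc       ".replaceAll(" +$", "");   // stored as "abc"

    // Predicate side: the search argument keeps the padded CHAR(10) literal.
    String predicateLiteral = "abc       ";

    // The equality predicate can never match the stripped value, so rows
    // would silently disappear once predicates are pushed down to Parquet.
    System.out.println(stored.equals(predicateLiteral));  // prints false
  }
}
{code}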



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20172) StatsUpdater failed with GSS Exception while trying to connect to remote metastore

2018-07-13 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-20172:
--
Component/s: (was: Hive)
 Transactions

> StatsUpdater failed with GSS Exception while trying to connect to remote 
> metastore
> --
>
> Key: HIVE-20172
> URL: https://issues.apache.org/jira/browse/HIVE-20172
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.1
> Environment: Hive-1.2.1,Hive2.1,java8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-20172.patch
>
>
> StatsUpdater task failed with GSS Exception while trying to connect to remote 
> Metastore.
> {code}
> org.apache.thrift.transport.TTransportException: GSS initiate failed 
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3526) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3558) 
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Worker$StatsUpdater.gatherStats(Worker.java:300)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:265) 
> at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:177) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174) 
> ) 
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> {code}
> Since the metastore client is running inside HMS, there is no need to connect to 
> a remote URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20153) Count and Sum UDF consume more memory in Hive 2+

2018-07-13 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543672#comment-16543672
 ] 

Aihua Xu commented on HIVE-20153:
-

Yes. I'm able to download it. 

> Count and Sum UDF consume more memory in Hive 2+
> 
>
> Key: HIVE-20153
> URL: https://issues.apache.org/jira/browse/HIVE-20153
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 2.3.2
>Reporter: Szehon Ho
>Assignee: Aihua Xu
>Priority: Major
> Attachments: Screen Shot 2018-07-12 at 6.41.28 PM.png
>
>
> While playing with Hive 2, we noticed that queries with a lot of count() and 
> sum() aggregations run out of memory on the Hadoop side where they worked before 
> in Hive 1. 
> In many queries we have to double the mapper memory settings (in our 
> particular case mapreduce.map.java.opts from -Xmx2000M to -Xmx4000M), which 
> makes it not so easy to upgrade to Hive 2.
> Taking a heap dump, we see one of the main culprits is the field 'uniqueObjects' 
> in GenericUDAFSum and GenericUDAFCount, which was added to support window 
> functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20172) StatsUpdater failed with GSS Exception while trying to connect to remote metastore

2018-07-13 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543668#comment-16543668
 ] 

Ashutosh Chauhan commented on HIVE-20172:
-

+1

> StatsUpdater failed with GSS Exception while trying to connect to remote 
> metastore
> --
>
> Key: HIVE-20172
> URL: https://issues.apache.org/jira/browse/HIVE-20172
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.1
> Environment: Hive-1.2.1,Hive2.1,java8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-20172.patch
>
>
> StatsUpdater task failed with GSS Exception while trying to connect to remote 
> Metastore.
> {code}
> org.apache.thrift.transport.TTransportException: GSS initiate failed 
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3526) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3558) 
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Worker$StatsUpdater.gatherStats(Worker.java:300)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:265) 
> at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:177) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174) 
> ) 
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> {code}
> Since the metastore client is running inside HMS, there is no need to connect to 
> a remote URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19940) Push predicates with deterministic UDFs with RBO

2018-07-13 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543658#comment-16543658
 ] 

Naveen Gangam commented on HIVE-19940:
--

Thanks [~janulatha]. I will wait for the pre-commits to run.

> Push predicates with deterministic UDFs with RBO
> 
>
> Key: HIVE-19940
> URL: https://issues.apache.org/jira/browse/HIVE-19940
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19940.1.patch, HIVE-19940.2.patch, 
> HIVE-19940.3.patch, HIVE-19940.4.patch, HIVE-19940.5.patch
>
>
> With RBO, predicates with any UDF don't get pushed down.  It makes sense to 
> not push down predicates with a non-deterministic function, as the meaning of 
> the query changes after the predicate is resolved to use the function.  But 
> pushing a deterministic function is beneficial.
> Test Case:
> {code}
> set hive.cbo.enable=false;
> CREATE TABLE `testb`(
>`cola` string COMMENT '',
>`colb` string COMMENT '',
>`colc` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> CREATE TABLE `testa`(
>`col1` string COMMENT '',
>`col2` string COMMENT '',
>`col3` string COMMENT '',
>`col4` string COMMENT '',
>`col5` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> insert into testA partition (part1='US', part2='ABC', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='UK', part2='DEF', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='US', part2='DEF', part3='200')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='CA', part2='ABC', part3='300')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='300')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='400')
> values ( '600', '700', 'abc'), ( '601', '701', 'abcd');
> insert into testB partition (part1='UK', part2='PQR', part3='500')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='DEF', part3='200')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='PQR', part3='123')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> -- views with deterministic functions
> create view viewDeterministicUDFA partitioned on (vpart1, vpart2, vpart3) as 
> select
>  cast(col1 as decimal(38,18)) as vcol1,
>  cast(col2 as decimal(38,18)) as vcol2,
>  cast(col3 as decimal(38,18)) as vcol3,
>  cast(col4 as decimal(38,18)) as vcol4,
>  cast(col5 as char(10)) as vcol5,
>  cast(part1 as char(2)) as vpart1,
>  cast(part2 as char(3)) as vpart2,
>  cast(part3 as char(3)) as vpart3
>  from testa
> where part1 in ('US', 'CA');
> create view viewDeterministicUDFB partitioned on (vpart1, vpart2, vpart3) as 
> select
>  cast(cola as decimal(38,18)) as vcolA,
>  cast(colb as decimal(38,18)) as vcolB,
>  cast(colc as char(10)) as vcolC,
>  cast(part1 as char(2)) as vpart1,
>  cast(part2 as char(3)) as vpart2,
>  cast(part3 as char(3)) as vpart3
>  from testb
> where part1 in ('US', 'CA');
> explain
> select vcol1, vcol2, vcol3, vcola, vcolb
> from viewDeterministicUDFA a inner join viewDeterministicUDFB b
> on a.vpart1 = b.vpart1
> and a.vpart2 = b.vpart2
> and a.vpart3 = b.vpart3
> and a.vpart1 = 'US'
> and a.vpart2 = 'DEF'
> and a.vpart3 = '200';
> {code}
> Plan where the CAST is not pushed down.
> {code}
> STAGE PLANS:
>   Stage: Stage-1
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: testa
> filterExpr: (part1) IN ('US', 'CA') (type: boolean)
> Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE 
> Column stats: NONE
> Select Operator
>   expressions: CAST( col1 AS decimal(38,18)) (type: 
> decimal(38,18)), CAST( col2 AS decimal(38,18)) (type: decimal(38,18)), CAST( 
> col3 AS decimal(38,18)) (type: decimal(38,18)), CAST( part1 AS CHAR(2)) 
> (type: char(2)), CAST( part2 AS CHAR(3)) (type: char(3)), CAST( part3 AS 
> CHAR(3)) (type: char(3))
>   outputColumnNames: _col0, _col1, _col2, _col5, _col6, _col7
>   Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE 
> Column stats: NONE
>   Filter Operator
> 

[jira] [Commented] (HIVE-19668) Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and duplicate strings

2018-07-13 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543655#comment-16543655
 ] 

Vihang Karajgaonkar commented on HIVE-19668:


+1

> Over 30% of the heap wasted by duplicate org.antlr.runtime.CommonToken's and 
> duplicate strings
> --
>
> Key: HIVE-19668
> URL: https://issues.apache.org/jira/browse/HIVE-19668
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HIVE-19668.01.patch, HIVE-19668.02.patch, 
> HIVE-19668.03.patch, image-2018-05-22-17-41-39-572.png
>
>
> I've recently analyzed a HS2 heap dump, obtained when there was a huge memory 
> spike during compilation of some big query. The analysis was done with jxray 
> ([www.jxray.com|http://www.jxray.com]). It turns out that more than 90% of 
> the 20G heap was used by data structures associated with query parsing 
> ({{org.apache.hadoop.hive.ql.parse.QBExpr}}). There are probably multiple 
> opportunities for optimizations here. One of them is to stop the code from 
> creating duplicate instances of the {{org.antlr.runtime.CommonToken}} class. See 
> a sample of these objects in the attached image:
> !image-2018-05-22-17-41-39-572.png|width=879,height=399!
> It looks like these particular {{CommonToken}} objects are constants that don't 
> change once created. I see some code, e.g. in 
> {{org.apache.hadoop.hive.ql.parse.CalcitePlanner}}, where such objects are 
> apparently repeatedly created with e.g. {{new 
> CommonToken(HiveParser.TOK_INSERT, "TOK_INSERT")}}. If these 33 token kinds 
> are instead created once and reused, we will save more than 1/10th of the 
> heap in this scenario. Plus, since these objects are small but very numerous, 
> getting rid of them will remove a great deal of pressure from the GC.
> Another source of waste is duplicate strings, which collectively waste 26.1% 
> of memory. Some of them come from CommonToken objects that have the same text 
> (i.e. for multiple CommonToken objects the contents of their 'text' Strings 
> are the same, but each has its own copy of that String). Other duplicate 
> strings come from other sources that are easy enough to fix by adding 
> String.intern() calls.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17593) DataWritableWriter strip spaces for CHAR type before writing, but predicate generator doesn't do same thing.

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543650#comment-16543650
 ] 

Hive QA commented on HIVE-17593:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 2289 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 1 new + 58 unchanged - 1 fixed 
= 59 total (was 59) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12591/dev-support/hive-personality.sh
 |
| git revision | master / d8306cf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12591/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12591/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> DataWritableWriter strip spaces for CHAR type before writing, but predicate 
> generator doesn't do same thing.
> 
>
> Key: HIVE-17593
> URL: https://issues.apache.org/jira/browse/HIVE-17593
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.3.0, 3.0.0
>Reporter: Junjie Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17593.2.patch, HIVE-17593.3.patch, 
> HIVE-17593.4.patch, HIVE-17593.patch
>
>
> DataWritableWriter strips spaces for the CHAR type before writing, but when 
> generating the predicate it does NOT do the same stripping, which would cause 
> missing data!
> In the current version it doesn't cause missing data, since the predicate is not 
> pushed down to Parquet due to HIVE-17261.
> Please see ConvertAstToSearchArg.java: getTypes treats CHAR and STRING the 
> same, which will build a predicate with trailing spaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20147) Hive streaming ingest is contented on synchronized logging

2018-07-13 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20147:
-
   Resolution: Fixed
Fix Version/s: 3.2.0
   4.0.0
   Status: Resolved  (was: Patch Available)

Committed to master and branch-3. Thanks for the review!

> Hive streaming ingest is contented on synchronized logging
> --
>
> Key: HIVE-20147
> URL: https://issues.apache.org/jira/browse/HIVE-20147
> Project: Hive
>  Issue Type: Bug
>  Components: Streaming, Transactions
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20147.1.patch, Screen Shot 2018-07-11 at 4.17.27 
> PM.png, sync-logger-contention.svg
>
>
> In one of the observed profiles, >30% of the time is spent on synchronized 
> logging. See the attachment. 
> We should use async logging for Hive streaming ingest by default.  !Screen 
> Shot 2018-07-11 at 4.17.27 PM.png!
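
A minimal sketch of switching the ingest client JVM to Log4j 2 async loggers. The context-selector property is standard Log4j 2 (it needs the LMAX Disruptor jar on the classpath and must be set before any logger is created); whether Hive exposes this through its own switch is left open here.

{code}
public class AsyncLoggingBootstrap {
  public static void main(String[] args) {
    // Must run before Log4j 2 initializes; requires com.lmax:disruptor on the classpath.
    System.setProperty("Log4jContextSelector",
        "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
    // ... start the streaming ingest client after this point ...
  }
}
{code}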



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543633#comment-16543633
 ] 

Hive QA commented on HIVE-20032:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931417/HIVE-20032.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 14650 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.exec.spark.TestSparkStatistics.testSparkStatistics 
(batchId=241)
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testSparkQuery (batchId=252)
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.testTempTable (batchId=252)
org.apache.hive.jdbc.TestJdbcWithMiniHS2ErasureCoding.testDescribeErasureCoding 
(batchId=250)
org.apache.hive.jdbc.TestJdbcWithMiniHS2ErasureCoding.testExplainErasureCoding 
(batchId=250)
org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery 
(batchId=252)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12590/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12590/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12590/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12931417 - PreCommit-HIVE-Build

> Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled
> -
>
> Key: HIVE-20032
> URL: https://issues.apache.org/jira/browse/HIVE-20032
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20032.1.patch, HIVE-20032.2.patch, 
> HIVE-20032.3.patch, HIVE-20032.4.patch
>
>
> Follow-up on HIVE-15104: if we don't enable RDD caching or group-by shuffles, 
> then we don't need to serialize the hashCode when shuffling data in HoS.
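
A rough illustration of the optimization (the class and field names are invented for the example, not the actual Hive-on-Spark classes): the hash code is only written to the shuffle stream when something downstream actually needs it.

{code}
import java.io.DataOutput;
import java.io.IOException;

class ShuffleKeySketch {
  private final byte[] keyBytes;
  private final int hashCode;
  private final boolean serializeHash;   // false when RDD caching / group-by shuffle are off

  ShuffleKeySketch(byte[] keyBytes, int hashCode, boolean serializeHash) {
    this.keyBytes = keyBytes;
    this.hashCode = hashCode;
    this.serializeHash = serializeHash;
  }

  void write(DataOutput out) throws IOException {
    out.writeInt(keyBytes.length);
    out.write(keyBytes);
    if (serializeHash) {
      out.writeInt(hashCode);            // skip these 4 bytes per row when not needed
    }
  }
}
{code}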



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20172) StatsUpdater failed with GSS Exception while trying to connect to remote metastore

2018-07-13 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-20172:
--
Attachment: HIVE-20172.patch
Status: Patch Available  (was: In Progress)

> StatsUpdater failed with GSS Exception while trying to connect to remote 
> metastore
> --
>
> Key: HIVE-20172
> URL: https://issues.apache.org/jira/browse/HIVE-20172
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.1
> Environment: Hive-1.2.1,Hive2.1,java8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-20172.patch
>
>
> StatsUpdater task failed with GSS Exception while trying to connect to remote 
> Metastore.
> {code}
> org.apache.thrift.transport.TTransportException: GSS initiate failed 
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3526) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3558) 
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Worker$StatsUpdater.gatherStats(Worker.java:300)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:265) 
> at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:177) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174) 
> ) 
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> {code}
> Since the metastore client is running inside HMS, there is no need to connect to 
> a remote URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-20172) StatsUpdater failed with GSS Exception while trying to connect to remote metastore

2018-07-13 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-20172 started by Rajkumar Singh.
-
> StatsUpdater failed with GSS Exception while trying to connect to remote 
> metastore
> --
>
> Key: HIVE-20172
> URL: https://issues.apache.org/jira/browse/HIVE-20172
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.1
> Environment: Hive-1.2.1,Hive2.1,java8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>
> StatsUpdater task failed with GSS Exception while trying to connect to remote 
> Metastore.
> {code}
> org.apache.thrift.transport.TTransportException: GSS initiate failed 
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3526) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3558) 
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Worker$StatsUpdater.gatherStats(Worker.java:300)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:265) 
> at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:177) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174) 
> ) 
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> {code}
> Since the metastore client is running inside HMS, there is no need to connect
> to a remote URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20172) StatsUpdater failed with GSS Exception while trying to connect to remote metastore

2018-07-13 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh reassigned HIVE-20172:
-


> StatsUpdater failed with GSS Exception while trying to connect to remote 
> metastore
> --
>
> Key: HIVE-20172
> URL: https://issues.apache.org/jira/browse/HIVE-20172
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.1.1
> Environment: Hive-1.2.1,Hive2.1,java8
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
>
> The StatsUpdater task failed with a GSS exception while trying to connect to
> the remote metastore.
> {code}
> org.apache.thrift.transport.TTransportException: GSS initiate failed 
> at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  
> at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:487)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
>  
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3526) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3558) 
> at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:533) 
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Worker$StatsUpdater.gatherStats(Worker.java:300)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:265) 
> at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:177) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:422) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>  
> at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:174) 
> ) 
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:534)
>  
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:282)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
>  
> {code}
> Since the metastore client is running inside HMS, there is no need to connect
> to a remote URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543615#comment-16543615
 ] 

Hive QA commented on HIVE-20032:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
0s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  5m 
57s{color} | {color:blue} ql in master has 2289 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 1 new + 12 unchanged - 0 fixed 
= 13 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12590/dev-support/hive-personality.sh
 |
| git revision | master / d8306cf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12590/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql kryo-registrator U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12590/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled
> -
>
> Key: HIVE-20032
> URL: https://issues.apache.org/jira/browse/HIVE-20032
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20032.1.patch, HIVE-20032.2.patch, 
> HIVE-20032.3.patch, HIVE-20032.4.patch
>
>
> Follow-up to HIVE-15104: if we don't enable RDD caching or group-by shuffles,
> then we don't need to serialize the hashCode when shuffling data in HoS.
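
For background, HiveKey carries both the serialized key bytes and a hash code used for
partitioning, and Hive on Spark registers Kryo serializers for it via the
kryo-registrator module. Below is a minimal sketch of a serializer that omits the hash
code; the class name is illustrative and this is not the actual HIVE-20032 change.

{code}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.util.Arrays;
import org.apache.hadoop.hive.ql.io.HiveKey;

// Sketch: write only the key bytes; the hash code is dropped because nothing
// downstream needs it once RDD caching and group-by shuffles are disabled.
public class PlainHiveKeySerializer extends Serializer<HiveKey> {
  @Override
  public void write(Kryo kryo, Output output, HiveKey key) {
    output.writeInt(key.getLength(), true);                  // length as a varint
    output.writeBytes(key.getBytes(), 0, key.getLength());   // key bytes only, no hash
  }

  @Override
  public HiveKey read(Kryo kryo, Input input, Class<HiveKey> type) {
    int len = input.readInt(true);
    byte[] bytes = input.readBytes(len);
    // The hash only matters when deciding a shuffle partition, so any
    // deterministic value is acceptable on the read side.
    return new HiveKey(bytes, Arrays.hashCode(bytes));
  }
}
{code}

Such a serializer would only be wired in when caching and group-by shuffles are off;
otherwise the hash code still has to travel with the key.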



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543612#comment-16543612
 ] 

Sergey Shelukhin commented on HIVE-19820:
-

There are two problems with aggregation. First, the write ID is not being passed
in. More importantly, merge needs different handling than the standard path: if
we fail to get stats, it should mark the stats invalid instead of blindly
writing unmerged stats. So the check cannot just be "get valid stats"; it
should be "get any stats", then check validity.

> add ACID stats support to background stats updater and fix bunch of edge 
> cases found in SU tests
> 
>
> Key: HIVE-19820
> URL: https://issues.apache.org/jira/browse/HIVE-19820
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19820.01-master-txnstats.patch, 
> HIVE-19820.01.patch, HIVE-19820.02-master-txnstats.patch, 
> HIVE-19820.03-master-txnstats.patch, HIVE-19820.03.patch, 
> HIVE-19820.04-master-txnstats.patch, HIVE-19820.patch, 
> branch-19820.02.nogen.patch, branch-19820.03.nogen.patch, 
> branch-19820.nogen.patch, branch-19820.nogen.patch
>
>
> Follow-up from HIVE-19418.
> Right now it checks whether stats are valid in an old-fashioned way... and it
> also gets the ACID state, then discards it without using it.
> When ACID stats are implemented, the ACID state needs to be used to do
> version-aware validity checks on the stats.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543609#comment-16543609
 ] 

Hive QA commented on HIVE-19820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} storage-api in master has 48 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m  
5s{color} | {color:blue} standalone-metastore/metastore-common in master has 
217 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2289 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
20s{color} | {color:red} storage-api: The patch generated 1 new + 3 unchanged - 
0 fixed = 4 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
55s{color} | {color:red} ql: The patch generated 42 new + 2390 unchanged - 18 
fixed = 2432 total (was 2408) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
56s{color} | {color:red} root: The patch generated 1 new + 3 unchanged - 0 
fixed = 4 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} itests/hcatalog-unit: The patch generated 5 new + 27 
unchanged - 1 fixed = 32 total (was 28) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch has 407 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} patch/storage-api cannot run setBugDatabaseInfo from 
findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  6m 
52s{color} | {color:red} patch/standalone-metastore/metastore-common cannot run 
setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
53s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
47s{color} | {color:red} patch/itests/hive-unit cannot run setBugDatabaseInfo 
from findbugs {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
8s{color} | {color:red} standalone-metastore generated 6 new + 54 unchanged - 0 
fixed = 60 total (was 54) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
36s{color} | {color:red} standalone-metastore_metastore-common generated 6 new 
+ 54 unchanged - 0 fixed = 60 total (was 54) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 10m 
20s{color} | {color:red} root generated 6 new + 371 unchanged - 0 fixed = 377 
total (was 371) {color} |

[jira] [Commented] (HIVE-19940) Push predicates with deterministic UDFs with RBO

2018-07-13 Thread Janaki Lahorani (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543592#comment-16543592
 ] 

Janaki Lahorani commented on HIVE-19940:


Thanks [~ngangam].  The golden files had to be updated.  I have uploaded a 
patch.

> Push predicates with deterministic UDFs with RBO
> 
>
> Key: HIVE-19940
> URL: https://issues.apache.org/jira/browse/HIVE-19940
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19940.1.patch, HIVE-19940.2.patch, 
> HIVE-19940.3.patch, HIVE-19940.4.patch, HIVE-19940.5.patch
>
>
> With RBO, predicates containing any UDF don't get pushed down.  It makes sense
> not to push down predicates with non-deterministic functions, as the meaning of
> the query would change once the predicate is evaluated through the function.  But
> pushing down predicates with deterministic functions is beneficial.
> Test Case:
> {code}
> set hive.cbo.enable=false;
> CREATE TABLE `testb`(
>`cola` string COMMENT '',
>`colb` string COMMENT '',
>`colc` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> CREATE TABLE `testa`(
>`col1` string COMMENT '',
>`col2` string COMMENT '',
>`col3` string COMMENT '',
>`col4` string COMMENT '',
>`col5` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> insert into testA partition (part1='US', part2='ABC', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='UK', part2='DEF', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='US', part2='DEF', part3='200')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='CA', part2='ABC', part3='300')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='300')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='400')
> values ( '600', '700', 'abc'), ( '601', '701', 'abcd');
> insert into testB partition (part1='UK', part2='PQR', part3='500')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='DEF', part3='200')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='PQR', part3='123')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> -- views with deterministic functions
> create view viewDeterministicUDFA partitioned on (vpart1, vpart2, vpart3) as 
> select
>  cast(col1 as decimal(38,18)) as vcol1,
>  cast(col2 as decimal(38,18)) as vcol2,
>  cast(col3 as decimal(38,18)) as vcol3,
>  cast(col4 as decimal(38,18)) as vcol4,
>  cast(col5 as char(10)) as vcol5,
>  cast(part1 as char(2)) as vpart1,
>  cast(part2 as char(3)) as vpart2,
>  cast(part3 as char(3)) as vpart3
>  from testa
> where part1 in ('US', 'CA');
> create view viewDeterministicUDFB partitioned on (vpart1, vpart2, vpart3) as 
> select
>  cast(cola as decimal(38,18)) as vcolA,
>  cast(colb as decimal(38,18)) as vcolB,
>  cast(colc as char(10)) as vcolC,
>  cast(part1 as char(2)) as vpart1,
>  cast(part2 as char(3)) as vpart2,
>  cast(part3 as char(3)) as vpart3
>  from testb
> where part1 in ('US', 'CA');
> explain
> select vcol1, vcol2, vcol3, vcola, vcolb
> from viewDeterministicUDFA a inner join viewDeterministicUDFB b
> on a.vpart1 = b.vpart1
> and a.vpart2 = b.vpart2
> and a.vpart3 = b.vpart3
> and a.vpart1 = 'US'
> and a.vpart2 = 'DEF'
> and a.vpart3 = '200';
> {code}
> Plan where the CAST is not pushed down.
> {code}
> STAGE PLANS:
>   Stage: Stage-1
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: testa
> filterExpr: (part1) IN ('US', 'CA') (type: boolean)
> Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE 
> Column stats: NONE
> Select Operator
>   expressions: CAST( col1 AS decimal(38,18)) (type: 
> decimal(38,18)), CAST( col2 AS decimal(38,18)) (type: decimal(38,18)), CAST( 
> col3 AS decimal(38,18)) (type: decimal(38,18)), CAST( part1 AS CHAR(2)) 
> (type: char(2)), CAST( part2 AS CHAR(3)) (type: char(3)), CAST( part3 AS 
> CHAR(3)) (type: char(3))
>   outputColumnNames: _col0, _col1, _col2, _col5, _col6, _col7
>   Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE 
> Column stats: NONE
>   Filter Oper

[jira] [Updated] (HIVE-19940) Push predicates with deterministic UDFs with RBO

2018-07-13 Thread Janaki Lahorani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-19940:
---
Attachment: HIVE-19940.5.patch

> Push predicates with deterministic UDFs with RBO
> 
>
> Key: HIVE-19940
> URL: https://issues.apache.org/jira/browse/HIVE-19940
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19940.1.patch, HIVE-19940.2.patch, 
> HIVE-19940.3.patch, HIVE-19940.4.patch, HIVE-19940.5.patch
>
>
> With RBO, predicates containing any UDF don't get pushed down.  It makes sense
> not to push down predicates with non-deterministic functions, as the meaning of
> the query would change once the predicate is evaluated through the function.  But
> pushing down predicates with deterministic functions is beneficial.
> Test Case:
> {code}
> set hive.cbo.enable=false;
> CREATE TABLE `testb`(
>`cola` string COMMENT '',
>`colb` string COMMENT '',
>`colc` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> CREATE TABLE `testa`(
>`col1` string COMMENT '',
>`col2` string COMMENT '',
>`col3` string COMMENT '',
>`col4` string COMMENT '',
>`col5` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> insert into testA partition (part1='US', part2='ABC', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='UK', part2='DEF', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='US', part2='DEF', part3='200')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='CA', part2='ABC', part3='300')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='300')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='400')
> values ( '600', '700', 'abc'), ( '601', '701', 'abcd');
> insert into testB partition (part1='UK', part2='PQR', part3='500')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='DEF', part3='200')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='PQR', part3='123')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> -- views with deterministic functions
> create view viewDeterministicUDFA partitioned on (vpart1, vpart2, vpart3) as 
> select
>  cast(col1 as decimal(38,18)) as vcol1,
>  cast(col2 as decimal(38,18)) as vcol2,
>  cast(col3 as decimal(38,18)) as vcol3,
>  cast(col4 as decimal(38,18)) as vcol4,
>  cast(col5 as char(10)) as vcol5,
>  cast(part1 as char(2)) as vpart1,
>  cast(part2 as char(3)) as vpart2,
>  cast(part3 as char(3)) as vpart3
>  from testa
> where part1 in ('US', 'CA');
> create view viewDeterministicUDFB partitioned on (vpart1, vpart2, vpart3) as 
> select
>  cast(cola as decimal(38,18)) as vcolA,
>  cast(colb as decimal(38,18)) as vcolB,
>  cast(colc as char(10)) as vcolC,
>  cast(part1 as char(2)) as vpart1,
>  cast(part2 as char(3)) as vpart2,
>  cast(part3 as char(3)) as vpart3
>  from testb
> where part1 in ('US', 'CA');
> explain
> select vcol1, vcol2, vcol3, vcola, vcolb
> from viewDeterministicUDFA a inner join viewDeterministicUDFB b
> on a.vpart1 = b.vpart1
> and a.vpart2 = b.vpart2
> and a.vpart3 = b.vpart3
> and a.vpart1 = 'US'
> and a.vpart2 = 'DEF'
> and a.vpart3 = '200';
> {code}
> Plan where the CAST is not pushed down.
> {code}
> STAGE PLANS:
>   Stage: Stage-1
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: testa
> filterExpr: (part1) IN ('US', 'CA') (type: boolean)
> Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE 
> Column stats: NONE
> Select Operator
>   expressions: CAST( col1 AS decimal(38,18)) (type: 
> decimal(38,18)), CAST( col2 AS decimal(38,18)) (type: decimal(38,18)), CAST( 
> col3 AS decimal(38,18)) (type: decimal(38,18)), CAST( part1 AS CHAR(2)) 
> (type: char(2)), CAST( part2 AS CHAR(3)) (type: char(3)), CAST( part3 AS 
> CHAR(3)) (type: char(3))
>   outputColumnNames: _col0, _col1, _col2, _col5, _col6, _col7
>   Statistics: Num rows: 6 Data size: 13740 Basic stats: COMPLETE 
> Column stats: NONE
>   Filter Operator
> predicate: ((_col5 = 'US') and (_col6 = 'DEF') and (_col7 = 
> '200')) (type: boole

[jira] [Commented] (HIVE-20116) TezTask is using parent logger

2018-07-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543586#comment-16543586
 ] 

Sergey Shelukhin commented on HIVE-20116:
-

yes

> TezTask is using parent logger
> --
>
> Key: HIVE-20116
> URL: https://issues.apache.org/jira/browse/HIVE-20116
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20116.1.patch, HIVE-20116.2.patch, 
> HIVE-20116.3.patch
>
>
> TezTask is using parent's logger (Task). It should instead use its own class 
> name.
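
A sketch of what the description amounts to (details of the actual patch may differ):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Declaring the logger on TezTask itself attributes log lines to
// org.apache.hadoop.hive.ql.exec.tez.TezTask instead of the Task base class.
public class TezTask /* extends Task<TezWork> in the real code */ {
  private static final Logger LOG = LoggerFactory.getLogger(TezTask.class);
}
{code}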



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20147) Hive streaming ingest is contented on synchronized logging

2018-07-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543588#comment-16543588
 ] 

Sergey Shelukhin commented on HIVE-20147:
-

+1

> Hive streaming ingest is contented on synchronized logging
> --
>
> Key: HIVE-20147
> URL: https://issues.apache.org/jira/browse/HIVE-20147
> Project: Hive
>  Issue Type: Bug
>  Components: Streaming, Transactions
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20147.1.patch, Screen Shot 2018-07-11 at 4.17.27 
> PM.png, sync-logger-contention.svg
>
>
> In one of the observed profiles, >30% of the time is spent on synchronized
> logging. See attachment.
> We should use async logging for Hive streaming ingest by default.  !Screen 
> Shot 2018-07-11 at 4.17.27 PM.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20136) Code Review of ArchiveUtils Class

2018-07-13 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543585#comment-16543585
 ] 

Aihua Xu commented on HIVE-20136:
-

LGTM. +1.

> Code Review of ArchiveUtils Class
> -
>
> Key: HIVE-20136
> URL: https://issues.apache.org/jira/browse/HIVE-20136
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20136.1.patch
>
>
> General code review of {{ArchiveUtils}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-20116) TezTask is using parent logger

2018-07-13 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543584#comment-16543584
 ] 

Prasanth Jayachandran edited comment on HIVE-20116 at 7/13/18 6:42 PM:
---

[~sershe] did you mean the +1 for the other logging patch :)  HIVE-20147 ?


was (Author: prasanth_j):
[~sershe] did you mean the +1 for the other logging patch :) ?

> TezTask is using parent logger
> --
>
> Key: HIVE-20116
> URL: https://issues.apache.org/jira/browse/HIVE-20116
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20116.1.patch, HIVE-20116.2.patch, 
> HIVE-20116.3.patch
>
>
> TezTask is using parent's logger (Task). It should instead use its own class 
> name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20116) TezTask is using parent logger

2018-07-13 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543584#comment-16543584
 ] 

Prasanth Jayachandran commented on HIVE-20116:
--

[~sershe] did you mean the +1 for the other logging patch :) ?

> TezTask is using parent logger
> --
>
> Key: HIVE-20116
> URL: https://issues.apache.org/jira/browse/HIVE-20116
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20116.1.patch, HIVE-20116.2.patch, 
> HIVE-20116.3.patch
>
>
> TezTask is using parent's logger (Task). It should instead use its own class 
> name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20116) TezTask is using parent logger

2018-07-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543578#comment-16543578
 ] 

Sergey Shelukhin commented on HIVE-20116:
-

+1

> TezTask is using parent logger
> --
>
> Key: HIVE-20116
> URL: https://issues.apache.org/jira/browse/HIVE-20116
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20116.1.patch, HIVE-20116.2.patch, 
> HIVE-20116.3.patch
>
>
> TezTask is using parent's logger (Task). It should instead use its own class 
> name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19820) add ACID stats support to background stats updater and fix bunch of edge cases found in SU tests

2018-07-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543576#comment-16543576
 ] 

Hive QA commented on HIVE-19820:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12931431/HIVE-19820.03.patch

{color:green}SUCCESS:{color} +1 due to 24 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 53 failed/errored test(s), 14660 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_10] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_2] 
(batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_analyze_decimal_compare]
 (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=21)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[autoColumnStats_10]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[autoColumnStats_2]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[change_allowincompatible_vectorization_false_date]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_into_default_keyword]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid2] 
(batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_decimal64_reader]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_llap] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_truncate]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_nonvec_table]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_nonvec_table_llap_io]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_vec_table]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_vec_table_llap_io]
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_nonvec_table]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_nonvec_table_llap_io]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_table]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_table_llap_io]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vecrow_table]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vecrow_table_llap_io]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats]
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_multi]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_adaptor_usage_mode]
 (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_char_2]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_coalesce_2]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_coalesce_3]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_data_type

[jira] [Updated] (HIVE-20116) TezTask is using parent logger

2018-07-13 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20116:
-
Attachment: HIVE-20116.3.patch

> TezTask is using parent logger
> --
>
> Key: HIVE-20116
> URL: https://issues.apache.org/jira/browse/HIVE-20116
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20116.1.patch, HIVE-20116.2.patch, 
> HIVE-20116.3.patch
>
>
> TezTask is using parent's logger (Task). It should instead use its own class 
> name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20147) Hive streaming ingest is contented on synchronized logging

2018-07-13 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543566#comment-16543566
 ] 

Prasanth Jayachandran commented on HIVE-20147:
--

[~gopalv] can you please review this? This just updates log level from INFO to 
DEBUG. 

> Hive streaming ingest is contented on synchronized logging
> --
>
> Key: HIVE-20147
> URL: https://issues.apache.org/jira/browse/HIVE-20147
> Project: Hive
>  Issue Type: Bug
>  Components: Streaming, Transactions
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20147.1.patch, Screen Shot 2018-07-11 at 4.17.27 
> PM.png, sync-logger-contention.svg
>
>
> In one of the observed profiles, >30% of the time is spent on synchronized
> logging. See attachment.
> We should use async logging for Hive streaming ingest by default.  !Screen 
> Shot 2018-07-11 at 4.17.27 PM.png!
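
As a concrete illustration of "updates log level from INFO to DEBUG" on the ingest hot
path (the statement and variable names below are illustrative, not lines from the patch):

{code}
// Per-write messages move from INFO to DEBUG and are guarded, so the
// synchronized appender is not touched at all when DEBUG is off.
if (LOG.isDebugEnabled()) {
  LOG.debug("Flushed {} records to {}", recordsWritten, partitionPath);  // was LOG.info(...)
}
{code}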



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19834) Clear Context Map of Paths to ContentSummary

2018-07-13 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543553#comment-16543553
 ] 

BELUGA BEHR commented on HIVE-19834:


[~pvary] Please review.

> Clear Context Map of Paths to ContentSummary
> 
>
> Key: HIVE-19834
> URL: https://issues.apache.org/jira/browse/HIVE-19834
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.3.2, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-19834.1.patch, HIVE-19834.2.patch
>
>
> The {{Context}} class has a {{clear}} method.  When it runs, various files are
> deleted and in-memory maps are cleared.  I would like to propose that we also
> clear an additional in-memory map structure that may contain a lot of data, so
> that it can be GC'ed as soon as possible. This map holds the mapping of
> "File Path" -> "Content Summary".  For a query with a large file set, this map
> can be quite large.
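
A sketch of what the proposal amounts to inside {{Context.clear()}}, assuming the cache
is the existing {{Map<String, ContentSummary>}} field (named {{pathToCS}} in current
Hive source; worth double-checking against the branch the patch targets):

{code}
public void clear() throws IOException {
  // ... existing cleanup: delete scratch dirs, clear temp-path maps ...
  pathToCS.clear();  // drop cached ContentSummary objects so they can be GC'ed right away
}
{code}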



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19846) Removed Deprecated Calls From FileUtils-getJarFilesByPath

2018-07-13 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543552#comment-16543552
 ] 

BELUGA BEHR commented on HIVE-19846:


[~pvary] Can you please review again? :)

> Removed Deprecated Calls From FileUtils-getJarFilesByPath
> -
>
> Key: HIVE-19846
> URL: https://issues.apache.org/jira/browse/HIVE-19846
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-19846.1.patch, HIVE-19846.2.patch, 
> HIVE-19846.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20163) Simplify StringSubstrColStart Initialization

2018-07-13 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20163:
---
Status: Patch Available  (was: Open)

> Simplify StringSubstrColStart Initialization
> 
>
> Key: HIVE-20163
> URL: https://issues.apache.org/jira/browse/HIVE-20163
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20163.1.patch, HIVE-20163.2.patch
>
>
> * Remove code
> * Remove exception handling
> * Remove {{printStackTrace}} call
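
The bullets above fit the usual pattern around {{getBytes("UTF-8")}}; a before/after
sketch of the simplification (assumed shape, not the patch itself):

{code}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

// Before: the checked exception forces a catch whose only action is printStackTrace.
byte[] empty;
try {
  empty = "".getBytes("UTF-8");
} catch (UnsupportedEncodingException e) {
  e.printStackTrace();
  empty = new byte[0];
}

// After: StandardCharsets cannot throw, so the handling disappears.
byte[] emptyBytes = "".getBytes(StandardCharsets.UTF_8);
{code}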



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20163) Simplify StringSubstrColStart Initialization

2018-07-13 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20163:
---
Attachment: HIVE-20163.2.patch

> Simplify StringSubstrColStart Initialization
> 
>
> Key: HIVE-20163
> URL: https://issues.apache.org/jira/browse/HIVE-20163
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20163.1.patch, HIVE-20163.2.patch
>
>
> * Remove code
> * Remove exception handling
> * Remove {{printStackTrace}} call



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20163) Simplify StringSubstrColStart Initialization

2018-07-13 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20163:
---
Status: Open  (was: Patch Available)

> Simplify StringSubstrColStart Initialization
> 
>
> Key: HIVE-20163
> URL: https://issues.apache.org/jira/browse/HIVE-20163
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20163.1.patch
>
>
> * Remove code
> * Remove exception handling
> * Remove {{printStackTrace}} call



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20131) SQL Script changes for creating txn write notification in 3.2.0 files

2018-07-13 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543548#comment-16543548
 ] 

Vineet Garg commented on HIVE-20131:


[~maheshk114] Sorry for the late response. What branch is this targeted for? I
see changes in the 3.0-to-3.1 upgrade and 3.1.0 schema files, which is not correct.
We are about to release Hive 3.1.0, so there should be no more changes for 3.1.0.

I also see changes in the 3.1-to-3.2 upgrade and 3.2.0 schema, so I assume this
will go into branch-3. For that you'll need a separate patch targeting branch-3.


> SQL Script changes for creating  txn write notification in 3.2.0 files 
> ---
>
> Key: HIVE-20131
> URL: https://issues.apache.org/jira/browse/HIVE-20131
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Fix For: 4.0.0
>
> Attachments: HIVE-20131.01.patch
>
>
> 1. Change the partition name size from 1024 to 767 (MySQL 5.6 and earlier
> supports keys of at most 767 bytes).
> 2. Remove the txn_write_notification_log table creation from the 3.1.0
> scripts and add new scripts for 3.2.0.
> 3. Remove the 3.1.0-to-4.0.0 file and instead add files for 3.2.0-to-4.0.0 and
> 3.1.0-to-3.2.0.
> 4. Change the metastore init schema XML file to use 4.0.0 instead of 3.1.0
> as the current version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20136) Code Review of ArchiveUtils Class

2018-07-13 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16543545#comment-16543545
 ] 

BELUGA BEHR commented on HIVE-20136:


[~aihuaxu] Can you please review? :)

> Code Review of ArchiveUtils Class
> -
>
> Key: HIVE-20136
> URL: https://issues.apache.org/jira/browse/HIVE-20136
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20136.1.patch
>
>
> General code review of {{ArchiveUtils}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

