[jira] [Commented] (HIVE-20948) Eliminate file rename in compactor

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-20948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035100#comment-17035100
 ] 

Hive QA commented on HIVE-20948:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
57s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20570/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20570/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Eliminate file rename in compactor
> --
>
> Key: HIVE-20948
> URL: https://issues.apache.org/jira/browse/HIVE-20948
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: László Pintér
>Priority: Major
> Attachments: HIVE-20948.01.patch, HIVE-20948.02.patch
>
>
> Once HIVE-20823 is committed, we should investigate whether it's possible to 
> have the compactor write directly to base_x_cZ or delta_x_y_cZ.
> For query-based compaction: can we control the location of the temp table dir?  
> We support external temp tables, so this may work, but we'd need to have 
> non-ACID inserts create files with {{bucket_x}} names.
>  
> For MR/Tez/LLAP-based compaction (should this be done at all?), we need to 
> figure out how retries of tasks will work.  Just like we currently generate an 
> MR job to compact, we should be able to generate a Tez job.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-16355) Service: embedded mode should only be available if service is loaded onto the classpath

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035087#comment-17035087
 ] 

Hive QA commented on HIVE-16355:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993118/HIVE-16355.06.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17990 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20569/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20569/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20569/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993118 - PreCommit-HIVE-Build

> Service: embedded mode should only be available if service is loaded onto the 
> classpath
> ---
>
> Key: HIVE-16355
> URL: https://issues.apache.org/jira/browse/HIVE-16355
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore, Server Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-16355.06.patch, HIVE-16355.1.patch, 
> HIVE-16355.2.patch, HIVE-16355.2.patch, HIVE-16355.3.patch, 
> HIVE-16355.4.patch, HIVE-16355.4.patch, HIVE-16355.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I would like to relax the hard reference to 
> {{EmbeddedThriftBinaryCLIService}} so that it is only used when the 
> {{service}} module is loaded onto the classpath.
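The classpath-conditional pattern proposed above can be sketched with a Python analogue (a hypothetical illustration of the idea, not the actual Java change):

```python
import importlib.util

# Analogue of the proposed change: check whether the module is actually
# available and resolve it lazily, instead of keeping a hard reference
# that breaks when the module is absent from the classpath.
def embedded_mode_available(module_name: str) -> bool:
    return importlib.util.find_spec(module_name) is not None

print(embedded_mode_available("json"))            # stdlib module: True
print(embedded_mode_available("no_such_service")) # absent module: False
```

In Java, the equivalent would typically be a `Class.forName` probe wrapped in a `try`/`catch (ClassNotFoundException)`.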





[jira] [Updated] (HIVE-22877) Fix decimal boundary check for casting to Decimal64

2020-02-11 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HIVE-22877:

Summary: Fix decimal boundary check for casting to Decimal64  (was: Wrong 
decimal boundary check for casting to Decimal64)

> Fix decimal boundary check for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal to Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `precision` is smaller than or equal to 18. Scale is 
> irrelevant.
> Since the vectorized generic UDF expression takes scale into account as well, 
> it computes the wrong output column vector: Decimal instead of Decimal64. This 
> in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  
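The boundary condition quoted above can be sketched as follows (a hypothetical illustration of the reported check, not Hive's actual code):

```python
# A signed 64-bit long holds at most 19 decimal digits
# (9223372036854775807), so any decimal with precision <= 18 is
# guaranteed to fit when stored as a scaled long.
MAX_DECIMAL64_PRECISION = 18

def fits_in_decimal64(precision: int, scale: int) -> bool:
    # Correct check: only precision matters; scale <= precision for
    # any decimal(p, s), so scale is irrelevant here.
    return precision <= MAX_DECIMAL64_PRECISION

def buggy_fits_in_decimal64(precision: int, scale: int) -> bool:
    # The check described in the issue: adding scale tightens the
    # bound and wrongly rejects types that do fit in a long.
    return scale + precision <= MAX_DECIMAL64_PRECISION

# decimal(18, 2) fits in a long, but the buggy bound rejects it.
print(fits_in_decimal64(18, 2))        # True
print(buggy_fits_in_decimal64(18, 2))  # False
```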





[jira] [Commented] (HIVE-22877) Fix decimal boundary check for casting to Decimal64

2020-02-11 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035080#comment-17035080
 ] 

Gopal Vijayaraghavan commented on HIVE-22877:
-

+1 tests pending.

> Fix decimal boundary check for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal to Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `precision` is smaller than or equal to 18. Scale is 
> irrelevant.
> Since the vectorized generic UDF expression takes scale into account as well, 
> it computes the wrong output column vector: Decimal instead of Decimal64. This 
> in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  





[jira] [Updated] (HIVE-22877) Wrong decimal boundary check for casting to Decimal64

2020-02-11 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HIVE-22877:

Summary: Wrong decimal boundary check for casting to Decimal64  (was: Wrong 
decimal boundary for casting to Decimal64)

> Wrong decimal boundary check for casting to Decimal64
> -
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal to Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `precision` is smaller than or equal to 18. Scale is 
> irrelevant.
> Since the vectorized generic UDF expression takes scale into account as well, 
> it computes the wrong output column vector: Decimal instead of Decimal64. This 
> in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  





[jira] [Updated] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HIVE-22877:

Description: 
During vectorization, decimal fields that are obtained via generic UDFs are 
cast to Decimal64 in some circumstances. For the decimal to Decimal64 cast, Hive 
compares the source column's `scale + precision` to 18 (the maximum number of 
digits that can be represented by a long). A decimal can fit in a long as long 
as its `precision` is smaller than or equal to 18. Scale is irrelevant.

Since the vectorized generic UDF expression takes scale into account as well, it 
computes the wrong output column vector: Decimal instead of Decimal64. This in 
turn causes a ClassCastException down the operator chain.

The query below fails with a ClassCastException:

 
{code:java}
create table mini_store
(
 s_store_sk int,
 s_store_id string
)
row format delimited fields terminated by '\t'
STORED AS ORC;

create table mini_sales
(
 ss_store_sk int,
 ss_quantity int,
 ss_sales_price decimal(7,2)
)
row format delimited fields terminated by '\t'
STORED AS ORC;
insert into mini_store values (1, 'store');
insert into mini_sales values (1, 2, 1.2);
select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
from mini_sales, mini_store where ss_store_sk = s_store_sk
{code}
 

 

  was:
During vectorization, decimal fields that are obtained via generic udfs are 
cast to Decimal64 in some circumstances. For decimal to decimal64 cast, hive 
compares the source column's `scale + precision` to 18(maximum number of digits 
that can be represented by a long). A decimal can fit in a long as long as its 
`precision` is smaller than or equal to 18. Scale is irrelevant.

Since vectorized generic udf expression takes precision into account, it 
computes wrong output column vector: Decimal instead of Decimal64. This in turn 
causes ClassCastException down the operator chain.

Below query fails with class cast exception:

 
{code:java}
create table mini_store
(
 s_store_sk int,
 s_store_id string
)
row format delimited fields terminated by '\t'
STORED AS ORC;

create table mini_sales
(
 ss_store_sk int,
 ss_quantity int,
 ss_sales_price decimal(7,2)
)
row format delimited fields terminated by '\t'
STORED AS ORC;
insert into mini_store values (1, 'store');
insert into mini_sales values (1, 2, 1.2);
select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
from mini_sales, mini_store where ss_store_sk = s_store_sk
{code}
 

 


> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal to Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `precision` is smaller than or equal to 18. Scale is 
> irrelevant.
> Since the vectorized generic UDF expression takes scale into account as well, 
> it computes the wrong output column vector: Decimal instead of Decimal64. This 
> in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  





[jira] [Commented] (HIVE-16355) Service: embedded mode should only be available if service is loaded onto the classpath

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035076#comment-17035076
 ] 

Hive QA commented on HIVE-16355:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
35s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} service in master has 51 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} jdbc in master has 16 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} service: The patch generated 1 new + 23 unchanged - 2 
fixed = 24 total (was 25) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} jdbc: The patch generated 3 new + 28 unchanged - 7 
fixed = 31 total (was 35) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
10s{color} | {color:red} root: The patch generated 4 new + 51 unchanged - 9 
fixed = 55 total (was 60) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20569/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20569/yetus/diff-checkstyle-service.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20569/yetus/diff-checkstyle-jdbc.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20569/yetus/diff-checkstyle-root.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20569/yetus/patch-asflicense-problems.txt
 |
| modules | C: service jdbc . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20569/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Service: embedded mode should only be available if service is loaded onto the 
> classpath
> ---
>
> Key: HIVE-16355
> URL: 

[jira] [Updated] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22877:

Description: 
During vectorization, decimal fields that are obtained via generic udfs are 
cast to Decimal64 in some circumstances. For decimal to decimal64 cast, hive 
compares the source column's `scale + precision` to 18(maximum number of digits 
that can be represented by a long). A decimal can fit in a long as long as its 
`precision` is smaller than or equal to 18. Scale is irrelevant.

Since vectorized generic udf expression takes precision into account, it 
computes wrong output column vector: Decimal instead of Decimal64. This in turn 
causes ClassCastException down the operator chain.

Below query fails with class cast exception:

 
{code:java}
create table mini_store
(
 s_store_sk int,
 s_store_id string
)
row format delimited fields terminated by '\t'
STORED AS ORC;

create table mini_sales
(
 ss_store_sk int,
 ss_quantity int,
 ss_sales_price decimal(7,2)
)
row format delimited fields terminated by '\t'
STORED AS ORC;
insert into mini_store values (1, 'store');
insert into mini_sales values (1, 2, 1.2);
select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
from mini_sales, mini_store where ss_store_sk = s_store_sk
{code}
 

 

  was:
During vectorization, decimal fields that are obtained via generic udfs are 
cast to Decimal64 in some circumstances. For decimal to decimal64 cast, hive 
compares the source column's `scale + precision` to 18(maximum number of digits 
that can be represented by a long). A decimal can fit in a long as long as its 
`scale` is smaller than or equal to 18. Precision is irrelevant.

Since vectorized generic udf expression takes precision into account, it 
computes wrong output column vector: Decimal instead of Decimal64. This in turn 
causes ClassCastException down the operator chain.

Below query fails with class cast exception:

 
{code:java}
create table mini_store
(
 s_store_sk int,
 s_store_id string
)
row format delimited fields terminated by '\t'
STORED AS ORC;

create table mini_sales
(
 ss_store_sk int,
 ss_quantity int,
 ss_sales_price decimal(7,2)
)
row format delimited fields terminated by '\t'
STORED AS ORC;
insert into mini_store values (1, 'store');
insert into mini_sales values (1, 2, 1.2);
select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
from mini_sales, mini_store where ss_store_sk = s_store_sk
{code}
 

 


> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During vectorization, decimal fields that are obtained via generic udfs are 
> cast to Decimal64 in some circumstances. For decimal to decimal64 cast, hive 
> compares the source column's `scale + precision` to 18(maximum number of 
> digits that can be represented by a long). A decimal can fit in a long as 
> long as its `precision` is smaller than or equal to 18. Scale is irrelevant.
> Since vectorized generic udf expression takes precision into account, it 
> computes wrong output column vector: Decimal instead of Decimal64. This in 
> turn causes ClassCastException down the operator chain.
> Below query fails with class cast exception:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  





[jira] [Updated] (HIVE-22263) MV has distinct on columns and query has count(distinct) on one of the columns, we do not trigger rewriting

2020-02-11 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22263:
---
Attachment: HIVE-22263.patch

> MV has distinct on columns and query has count(distinct) on one of the 
> columns, we do not trigger rewriting
> ---
>
> Key: HIVE-22263
> URL: https://issues.apache.org/jira/browse/HIVE-22263
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Materialized views
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: HIVE-22263.patch, count-distinct.sql, count-distinct2.sql
>
>
> Count distinct issues with materialized views.  Two scripts attached
> 1) 
> create materialized view base_aview stored as orc as select distinct c1 c1, 
> c2 c2 from base;
> explain extended select count(distinct c1) from base group by c2 ;
> 2)
> create materialized view base_aview stored as orc as SELECT c1 c1, c2 c2, 
> sum(c2) FROM base group by 1,2;
> explain extended select count(distinct c1) from base group by c2;





[jira] [Commented] (HIVE-22874) Beeline unable to use credentials from URL.

2020-02-11 Thread Naveen Gangam (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035072#comment-17035072
 ] 

Naveen Gangam commented on HIVE-22874:
--

[~samuelan] [~vihangk1] [~ychena] Could you please review the patch? 

> Beeline unable to use credentials from URL.
> ---
>
> Key: HIVE-22874
> URL: https://issues.apache.org/jira/browse/HIVE-22874
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22874.patch
>
>
> Beeline is not using password value from the URL. 
> Using LDAP Auth in this case, so the failure is on connect.
> bin/beeline -u 
> "jdbc:hive2://localhost:1/default;user=test1;password=test1" 
> On the server side in LdapAuthenticator, the principals come out to (via a 
> special debug logging)
> 2020-02-11T11:10:31,613  INFO [HiveServer2-Handler-Pool: Thread-67] 
> auth.LdapAuthenticationProviderImpl: Connecting to ldap as 
> user/password:test1:anonymous
> This bug may have been introduced via
> https://github.com/apache/hive/commit/749e831060381a8ae4775630efb72d5cd040652f
> pass = "" (an empty string) on this line:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L848
> but on this line of code, it checks whether pass is null, which will not be 
> true, so the password is never picked up from the JDBC URL:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L900
> It has another chance here, but pass != null will always be true, so it never 
> enters the else branch:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L909
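The control flow described above can be sketched as follows (a hypothetical simplification of the reported flow, not Beeline's actual code):

```python
def resolve_password(url_props: dict) -> str:
    # Buggy flow as reported: pass starts out as "" (empty string),
    # so the later null check never fires and the password carried in
    # the JDBC URL properties is never consulted.
    password = ""                             # pass = "" (BeeLine.java#L848)
    if password is None:                      # null check: never true
        password = url_props.get("password")  # URL value never read
    return password

# The URL carries password=test1, yet the empty string survives, and
# the LDAP bind proceeds as anonymous.
resolve_password({"user": "test1", "password": "test1"})  # returns ""
```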





[jira] [Updated] (HIVE-22874) Beeline unable to use credentials from URL.

2020-02-11 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22874:
-
Status: Patch Available  (was: Open)

This issue also existed in the JDBC client, not just Beeline. Another area of 
code has the same problem:
https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/Utils.java#L382

This pattern parses the key/value pairs in the URL. The first regex group is 
supposed to capture the key of a (key=value) pair: everything before the first 
"=" is the key, then a "=", and then a value optionally followed by a ";".

So by definition, a key cannot contain a "=". Values, however, can contain "=" 
(for certain properties, like passwords).

The problem with the current regex,
"([^;]*)=([^;]*)[;]?"
is that the first group (any character except ";") is greedy, so it matches up 
to the last "=" instead of the first.

So for key/value pairs like
key=value= (value is "value=")
key==value (value is "=value")
key=val==ue (value is "val==ue")

the regex groups return (corresponding to the inputs above)
key is "key=value", value is ""
key is "key=", value is "value"
key is "key=val=", value is "ue"

Instead, the regex should treat everything before the first "=" as the key, and 
the rest up to the end or a ";" as the value.

The attached patch does that.
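The greedy-match behavior described above can be reproduced directly. The "fixed" pattern below is an illustrative sketch of the stated rule (the key stops at the first "="), not necessarily the exact pattern in the attached patch:

```python
import re

# Reported pattern: the first [^;]* group is greedy, so it swallows
# every "=" except the last one in the pair.
broken = re.compile(r"([^;]*)=([^;]*)[;]?")

# Sketch of the fix: exclude "=" from the key group so the key ends
# at the FIRST "=", and the value keeps any further "=" characters.
fixed = re.compile(r"([^=;]*)=([^;]*)[;]?")

for pair in ["key=value=", "key==value", "key=val==ue"]:
    print(pair, "broken:", broken.match(pair).groups(),
          "fixed:", fixed.match(pair).groups())
```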

> Beeline unable to use credentials from URL.
> ---
>
> Key: HIVE-22874
> URL: https://issues.apache.org/jira/browse/HIVE-22874
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22874.patch
>
>
> Beeline is not using the password value from the URL. 
> Using LDAP auth in this case, so the failure is on connect.
> bin/beeline -u 
> "jdbc:hive2://localhost:1/default;user=test1;password=test1" 
> On the server side, in LdapAuthenticator, the principals come out as (via 
> special debug logging):
> 2020-02-11T11:10:31,613  INFO [HiveServer2-Handler-Pool: Thread-67] 
> auth.LdapAuthenticationProviderImpl: Connecting to ldap as 
> user/password:test1:anonymous
> This bug may have been introduced via
> https://github.com/apache/hive/commit/749e831060381a8ae4775630efb72d5cd040652f
> pass = "" (an empty string on this line):
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L848
> but on this line of code, it checks whether pass is null, which will never be 
> true, so it never picks up the password from the JDBC URL:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L900
> It has another chance here, but pass != null will always be true, so it never 
> enters the else branch:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L909
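The null-check flow described in the report can be illustrated with a tiny, self-contained sketch. The method and variable names here are hypothetical, not the actual BeeLine.java code:

```java
public class BeelinePassDemo {
    // Hypothetical reconstruction: pass is initialized to "" rather than null,
    // so the later null check can never fire and the URL password is dropped.
    public static String resolvePassword(String urlPass) {
        String pass = "";          // initialized to an empty string, not null
        if (pass == null) {        // never true, so urlPass is ignored
            pass = urlPass;
        }
        return pass;
    }

    public static void main(String[] args) {
        System.out.println("'" + resolvePassword("test1") + "'"); // prints ''
    }
}
```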



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22263) MV has distinct on columns and query has count(distinct) on one of the columns, we do not trigger rewriting

2020-02-11 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22263:
---
Status: Patch Available  (was: In Progress)

> MV has distinct on columns and query has count(distinct) on one of the 
> columns, we do not trigger rewriting
> ---
>
> Key: HIVE-22263
> URL: https://issues.apache.org/jira/browse/HIVE-22263
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Materialized views
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: count-distinct.sql, count-distinct2.sql
>
>
> Count(distinct) issues with materialized views. Two scripts are attached:
> 1) 
> create materialized view base_aview stored as orc as select distinct c1 c1, 
> c2 c2 from base;
> explain extended select count(distinct c1) from base group by c2 ;
> 2)
> create materialized view base_aview stored as orc as SELECT c1 c1, c2 c2, 
> sum(c2) FROM base group by 1,2;
> explain extended select count(distinct c1) from base group by c2;



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-22263) MV has distinct on columns and query has count(distinct) on one of the columns, we do not trigger rewriting

2020-02-11 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22263 started by Jesus Camacho Rodriguez.
--
> MV has distinct on columns and query has count(distinct) on one of the 
> columns, we do not trigger rewriting
> ---
>
> Key: HIVE-22263
> URL: https://issues.apache.org/jira/browse/HIVE-22263
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Materialized views
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: count-distinct.sql, count-distinct2.sql
>
>
> Count(distinct) issues with materialized views. Two scripts are attached:
> 1) 
> create materialized view base_aview stored as orc as select distinct c1 c1, 
> c2 c2 from base;
> explain extended select count(distinct c1) from base group by c2 ;
> 2)
> create materialized view base_aview stored as orc as SELECT c1 c1, c2 c2, 
> sum(c2) FROM base group by 1,2;
> explain extended select count(distinct c1) from base group by c2;



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22263) MV has distinct on columns and query has count(distinct) on one of the columns, we do not trigger rewriting

2020-02-11 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-22263:
--

Assignee: Jesus Camacho Rodriguez

> MV has distinct on columns and query has count(distinct) on one of the 
> columns, we do not trigger rewriting
> ---
>
> Key: HIVE-22263
> URL: https://issues.apache.org/jira/browse/HIVE-22263
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Materialized views
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Jesus Camacho Rodriguez
>Priority: Critical
> Attachments: count-distinct.sql, count-distinct2.sql
>
>
> Count(distinct) issues with materialized views. Two scripts are attached:
> 1) 
> create materialized view base_aview stored as orc as select distinct c1 c1, 
> c2 c2 from base;
> explain extended select count(distinct c1) from base group by c2 ;
> 2)
> create materialized view base_aview stored as orc as SELECT c1 c1, c2 c2, 
> sum(c2) FROM base group by 1,2;
> explain extended select count(distinct c1) from base group by c2;



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22261) Support for materialized view rewriting with window functions

2020-02-11 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22261:
---
Summary: Support for materialized view rewriting with window functions  
(was: Add tests for materialized view rewriting with window functions)

> Support for materialized view rewriting with window functions
> -
>
> Key: HIVE-22261
> URL: https://issues.apache.org/jira/browse/HIVE-22261
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Materialized views, Tests
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22261.patch, af2.sql
>
>
> Materialized view rewriting doesn't support window functions.  At a minimum, 
> we should print a friendlier message when the rewrite fails (the view can 
> still be created with "disable rewrite")
> Script is attached
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22874) Beeline unable to use credentials from URL.

2020-02-11 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22874:
-
Attachment: HIVE-22874.patch

> Beeline unable to use credentials from URL.
> ---
>
> Key: HIVE-22874
> URL: https://issues.apache.org/jira/browse/HIVE-22874
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22874.patch
>
>
> Beeline is not using the password value from the URL. 
> Using LDAP auth in this case, so the failure is on connect.
> bin/beeline -u 
> "jdbc:hive2://localhost:1/default;user=test1;password=test1" 
> On the server side, in LdapAuthenticator, the principals come out as (via 
> special debug logging):
> 2020-02-11T11:10:31,613  INFO [HiveServer2-Handler-Pool: Thread-67] 
> auth.LdapAuthenticationProviderImpl: Connecting to ldap as 
> user/password:test1:anonymous
> This bug may have been introduced via
> https://github.com/apache/hive/commit/749e831060381a8ae4775630efb72d5cd040652f
> pass = "" (an empty string on this line):
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L848
> but on this line of code, it checks whether pass is null, which will never be 
> true, so it never picks up the password from the JDBC URL:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L900
> It has another chance here, but pass != null will always be true, so it never 
> enters the else branch:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L909



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22844) Validate cm configs, add retries in fs apis for cm

2020-02-11 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22844:
---
Status: In Progress  (was: Patch Available)

> Validate cm configs, add retries in fs apis for cm
> --
>
> Key: HIVE-22844
> URL: https://issues.apache.org/jira/browse/HIVE-22844
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22844.patch, HIVE-22844.patch, HIVE-22844.patch, 
> HIVE-22844.patch, HIVE-22844.patch, HIVE-22844.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # Retry create cm root logic
>  # Rename encryptionZones to cmRootLocations to be more accurate
>  # Check cmRootEncrypted.isAbsolute() first before we go for creating anything
>  # Validate fallbackNonEncryptedCmRootDir if it's really not encrypted
>  # Refactor deleteTableData logic



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22844) Validate cm configs, add retries in fs apis for cm

2020-02-11 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22844:
---
Attachment: HIVE-22844.patch
Status: Patch Available  (was: In Progress)

> Validate cm configs, add retries in fs apis for cm
> --
>
> Key: HIVE-22844
> URL: https://issues.apache.org/jira/browse/HIVE-22844
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22844.patch, HIVE-22844.patch, HIVE-22844.patch, 
> HIVE-22844.patch, HIVE-22844.patch, HIVE-22844.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # Retry create cm root logic
>  # Rename encryptionZones to cmRootLocations to be more accurate
>  # Check cmRootEncrypted.isAbsolute() first before we go for creating anything
>  # Validate fallbackNonEncryptedCmRootDir if it's really not encrypted
>  # Refactor deleteTableData logic



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22867) Add partitioning support to VectorTopNKeyOperator

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035047#comment-17035047
 ] 

Hive QA commented on HIVE-22867:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993117/HIVE-22867.1.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 17990 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_case_when_conversion]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_coalesce]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_expressions]
 (batchId=180)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets_limit]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_concat]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_13]
 (batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_7]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_8]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0]
 (batchId=184)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_short_regress]
 (batchId=178)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testAsyncSessionInitFailures
 (batchId=349)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20568/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20568/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20568/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993117 - PreCommit-HIVE-Build

> Add partitioning support to VectorTopNKeyOperator 
> --
>
> Key: HIVE-22867
> URL: https://issues.apache.org/jira/browse/HIVE-22867
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-22867.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22867) Add partitioning support to VectorTopNKeyOperator

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035026#comment-17035026
 ] 

Hive QA commented on HIVE-22867:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
45s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
48s{color} | {color:red} ql: The patch generated 2 new + 408 unchanged - 0 
fixed = 410 total (was 408) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
57s{color} | {color:red} ql generated 2 new + 1532 unchanged - 0 fixed = 1534 
total (was 1532) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  org.apache.hadoop.hive.ql.plan.VectorTopNKeyDesc.getPartitionKeyColumns() 
may expose internal representation by returning 
VectorTopNKeyDesc.partitionKeyColumns  At VectorTopNKeyDesc.java:by returning 
VectorTopNKeyDesc.partitionKeyColumns  At VectorTopNKeyDesc.java:[line 42] |
|  |  
org.apache.hadoop.hive.ql.plan.VectorTopNKeyDesc.setPartitionKeyColumns(VectorExpression[])
 may expose internal representation by storing an externally mutable object 
into VectorTopNKeyDesc.partitionKeyColumns  At VectorTopNKeyDesc.java:by 
storing an externally mutable object into VectorTopNKeyDesc.partitionKeyColumns 
 At VectorTopNKeyDesc.java:[line 46] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20568/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20568/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20568/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20568/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
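The two FindBugs warnings in the report above flag a getter/setter pair that shares a mutable array with callers (the EI_EXPOSE_REP pattern). The standard remedy is a defensive copy, sketched here with a hypothetical class, not the actual VectorTopNKeyDesc code:

```java
import java.util.Arrays;

public class DefensiveCopyDemo {
    private int[] keyColumns;

    // Store a copy so later mutations of the caller's array don't leak in.
    public void setKeyColumns(int[] columns) {
        this.keyColumns = columns == null ? null : Arrays.copyOf(columns, columns.length);
    }

    // Return a copy so callers can't mutate the internal array.
    public int[] getKeyColumns() {
        return keyColumns == null ? null : Arrays.copyOf(keyColumns, keyColumns.length);
    }
}
```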



> Add partitioning support to VectorTopNKeyOperator 
> --
>
> Key: HIVE-22867
> URL: https://issues.apache.org/jira/browse/HIVE-22867
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-22867.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22863) Commit compaction txn if it is opened but compaction is skipped

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17035020#comment-17035020
 ] 

Hive QA commented on HIVE-22863:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993114/HIVE-22863.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17992 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20567/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20567/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20567/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993114 - PreCommit-HIVE-Build

> Commit compaction txn if it is opened but compaction is skipped
> ---
>
> Key: HIVE-22863
> URL: https://issues.apache.org/jira/browse/HIVE-22863
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-22863.01.patch, HIVE-22863.02.patch
>
>
> Currently, if a table does not have enough directories to compact, compaction 
> is skipped and the compaction is either (a) marked ready for cleaning or (b) 
> marked compacted. However, the txn the compaction runs in is never committed; 
> it remains open, so TXNS and TXN_COMPONENTS will never be cleared of 
> information about the attempted compaction.
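The fix direction described above can be sketched with a stub transaction handler (a hypothetical API, not Hive's actual TxnStore): the compaction txn is resolved on every path, including the "not enough to compact" early exit.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal stand-in for a transaction handler; tracks which txns remain open.
class StubTxnHandler {
    final Set<Long> open = new HashSet<>();
    private long next = 1;
    long openTxn() { long id = next++; open.add(id); return id; }
    void commitTxn(long id) { open.remove(id); }
    void abortTxn(long id) { open.remove(id); }
}

public class CompactorTxnDemo {
    public static void compact(StubTxnHandler h, boolean enoughToCompact) {
        long txnId = h.openTxn();
        try {
            if (!enoughToCompact) {
                // Previously the worker just returned here, leaving the txn
                // open forever in TXNS / TXN_COMPONENTS.
                h.commitTxn(txnId);
                return;
            }
            // ... run the actual compaction ...
            h.commitTxn(txnId);
        } catch (RuntimeException e) {
            h.abortTxn(txnId);
            throw e;
        }
    }
}
```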



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22863) Commit compaction txn if it is opened but compaction is skipped

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034992#comment-17034992
 ] 

Hive QA commented on HIVE-22863:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20567/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20567/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Commit compaction txn if it is opened but compaction is skipped
> ---
>
> Key: HIVE-22863
> URL: https://issues.apache.org/jira/browse/HIVE-22863
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-22863.01.patch, HIVE-22863.02.patch
>
>
> Currently, if a table does not have enough directories to compact, compaction 
> is skipped and the compaction is either (a) marked ready for cleaning or (b) 
> marked compacted. However, the txn the compaction runs in is never committed; 
> it remains open, so TXNS and TXN_COMPONENTS will never be cleared of 
> information about the attempted compaction.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22815) reduce the unnecessary file system object creation in MROutput

2020-02-11 Thread Richard Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zhang updated HIVE-22815:
-
Attachment: Hive-22815.5.patch

> reduce the unnecessary file system object creation in MROutput 
> ---
>
> Key: HIVE-22815
> URL: https://issues.apache.org/jira/browse/HIVE-22815
> Project: Hive
>  Issue Type: Bug
>Reporter: Richard Zhang
>Assignee: Richard Zhang
>Priority: Major
> Attachments: Hive-22815.2.patch, Hive-22815.5.patch
>
>
> MROutput creates unnecessary file system objects, which may cause long 
> latency in cloud environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22815) reduce the unnecessary file system object creation in MROutput

2020-02-11 Thread Richard Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zhang updated HIVE-22815:
-
Attachment: (was: Hive-22815.4.patch)

> reduce the unnecessary file system object creation in MROutput 
> ---
>
> Key: HIVE-22815
> URL: https://issues.apache.org/jira/browse/HIVE-22815
> Project: Hive
>  Issue Type: Bug
>Reporter: Richard Zhang
>Assignee: Richard Zhang
>Priority: Major
> Attachments: Hive-22815.2.patch, Hive-22815.5.patch
>
>
> MROutput creates unnecessary file system objects, which may cause long 
> latency in cloud environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22815) reduce the unnecessary file system object creation in MROutput

2020-02-11 Thread Richard Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zhang updated HIVE-22815:
-
Attachment: Hive-22815.4.patch

> reduce the unnecessary file system object creation in MROutput 
> ---
>
> Key: HIVE-22815
> URL: https://issues.apache.org/jira/browse/HIVE-22815
> Project: Hive
>  Issue Type: Bug
>Reporter: Richard Zhang
>Assignee: Richard Zhang
>Priority: Major
> Attachments: Hive-22815.2.patch, Hive-22815.4.patch
>
>
> MROutput creates unnecessary file system objects, which may cause long 
> latency in cloud environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22862) Remove unnecessary calls to isEnoughToCompact

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034974#comment-17034974
 ] 

Hive QA commented on HIVE-22862:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993112/HIVE-22862.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17990 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20566/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20566/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20566/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993112 - PreCommit-HIVE-Build

> Remove unnecessary calls to isEnoughToCompact
> -
>
> Key: HIVE-22862
> URL: https://issues.apache.org/jira/browse/HIVE-22862
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Attachments: HIVE-22862.01.patch, HIVE-22862.01.patch
>
>
> QueryCompactor.Util#isEnoughToCompact is called once in Worker#run before any 
> compaction is run; after that, it is called unnecessarily in three other 
> places during compaction.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22815) reduce the unnecessary file system object creation in MROutput

2020-02-11 Thread Richard Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zhang updated HIVE-22815:
-
Attachment: (was: HIVE-22815.3.patch)

> reduce the unnecessary file system object creation in MROutput 
> ---
>
> Key: HIVE-22815
> URL: https://issues.apache.org/jira/browse/HIVE-22815
> Project: Hive
>  Issue Type: Bug
>Reporter: Richard Zhang
>Assignee: Richard Zhang
>Priority: Major
> Attachments: Hive-22815.2.patch
>
>
> MROutput creates unnecessary file system objects, which may cause long 
> latency in cloud environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22815) reduce the unnecessary file system object creation in MROutput

2020-02-11 Thread Richard Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Zhang updated HIVE-22815:
-
Attachment: HIVE-22815.3.patch

> reduce the unnecessary file system object creation in MROutput 
> ---
>
> Key: HIVE-22815
> URL: https://issues.apache.org/jira/browse/HIVE-22815
> Project: Hive
>  Issue Type: Bug
>Reporter: Richard Zhang
>Assignee: Richard Zhang
>Priority: Major
> Attachments: HIVE-22815.3.patch, Hive-22815.2.patch
>
>
> MROutput creates unnecessary file system objects, which may cause long 
> latency in cloud environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22862) Remove unnecessary calls to isEnoughToCompact

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034955#comment-17034955
 ] 

Hive QA commented on HIVE-22862:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
53s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20566/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20566/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Remove unnecessary calls to isEnoughToCompact
> -
>
> Key: HIVE-22862
> URL: https://issues.apache.org/jira/browse/HIVE-22862
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Attachments: HIVE-22862.01.patch, HIVE-22862.01.patch
>
>
> QueryCompactor.Util#isEnoughToCompact is called once in Worker#run before any 
> compaction is run; after this it is called unnecessarily in 3 other places 
> during compaction.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread Mustafa Iman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034950#comment-17034950
 ] 

Mustafa Iman commented on HIVE-22877:
-

[~rameshkumar] [~rizhang] Can you review?

> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal-to-Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `scale` is smaller than or equal to 18; precision is 
> irrelevant.
> Since the vectorized generic UDF expression takes precision into account, it 
> computes the wrong output column vector type: Decimal instead of Decimal64. 
> This in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  
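The boundary comparison described in the report can be sketched as follows. This is an illustrative reduction of the reported check, not Hive's actual code; both method names are hypothetical, and the "expected" variant encodes the report's own reasoning (only `scale` must fit in a long's 18 digits):

```java
public class Decimal64Boundary {
    // Comparison as described in the report: `scale + precision` is checked
    // against 18, so a type such as decimal(18,2) is rejected even though
    // its digits fit in a long, and the output vector type is mis-chosen.
    static boolean fitsInDecimal64Reported(int precision, int scale) {
        return scale + precision <= 18;
    }

    // Check per the report's reasoning: only `scale` needs to be at most 18
    // for the value to be representable in a long; precision is irrelevant.
    static boolean fitsInDecimal64Expected(int precision, int scale) {
        return scale <= 18;
    }

    public static void main(String[] args) {
        // decimal(18,2): representable, but the reported comparison rejects it.
        System.out.println(fitsInDecimal64Reported(18, 2)); // false
        System.out.println(fitsInDecimal64Expected(18, 2)); // true
    }
}
```

The mismatch between the two predicates is what makes the vectorizer emit a Decimal column vector where downstream operators expect Decimal64.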



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?focusedWorklogId=385579=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385579
 ]

ASF GitHub Bot logged work on HIVE-22877:
-

Author: ASF GitHub Bot
Created on: 12/Feb/20 00:38
Start Date: 12/Feb/20 00:38
Worklog Time Spent: 10m 
  Work Description: mustafaiman commented on pull request #905: HIVE-22877: 
Wrong decimal boundary for casting to Decimal64
URL: https://github.com/apache/hive/pull/905
 
 
   During vectorization, decimal fields that are obtained via generic UDFs are 
cast to Decimal64 in some circumstances. For the decimal-to-Decimal64 cast, 
Hive compares the source column's `scale + precision` to 18 (the maximum number 
of digits that can be represented by a long). A decimal can fit in a long as 
long as its `scale` is smaller than or equal to 18; precision is irrelevant.
   
   Since the vectorized generic UDF expression takes precision into account, it 
computes the wrong output column vector type: Decimal instead of Decimal64. 
This in turn causes a ClassCastException down the operator chain.
   
   Change-Id: I100cb3fbbc10fb71e8a7a8cd33f5788018388ddb
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 385579)
Remaining Estimate: 0h
Time Spent: 10m

> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal-to-Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `scale` is smaller than or equal to 18; precision is 
> irrelevant.
> Since the vectorized generic UDF expression takes precision into account, it 
> computes the wrong output column vector type: Decimal instead of Decimal64. 
> This in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22877:
--
Labels: pull-request-available  (was: )

> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22877.patch
>
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal-to-Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `scale` is smaller than or equal to 18; precision is 
> irrelevant.
> Since the vectorized generic UDF expression takes precision into account, it 
> computes the wrong output column vector type: Decimal instead of Decimal64. 
> This in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22877:

Attachment: HIVE-22877.patch
Status: Patch Available  (was: In Progress)

> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-22877.patch
>
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal-to-Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `scale` is smaller than or equal to 18; precision is 
> irrelevant.
> Since the vectorized generic UDF expression takes precision into account, it 
> computes the wrong output column vector type: Decimal instead of Decimal64. 
> This in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query

2020-02-11 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21778:
---
Status: Patch Available  (was: Open)

> CBO: "Struct is not null" gets evaluated as `nullable` always causing filter 
> miss in the query
> --
>
> Key: HIVE-21778
> URL: https://issues.apache.org/jira/browse/HIVE-21778
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5, 4.0.0
>Reporter: Rajesh Balamohan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, test_null.q, 
> test_null.q.out
>
>
> {noformat}
> drop table if exists test_struct;
> CREATE external TABLE test_struct
> (
>   f1 string,
>   demo_struct struct,
>   datestr string
> );
> set hive.cbo.enable=true;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note 
> that demo_struct filter is not added here
>   Filter Operator
> predicate: (datestr = '2019-01-01') (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> set hive.cbo.enable=false;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean) <- Note that demo_struct filter is added when CBO is 
> turned off
>   Filter Operator
> predicate: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> {noformat}
> In CalcitePlanner::genFilterRelNode, the following code fails to evaluate 
> this filter. 
> {noformat}
> RexNode factoredFilterExpr = RexUtil
>   .pullFactors(cluster.getRexBuilder(), convertedFilterExpr);
> {noformat}
> Note that if we instead reference `demo_struct.f1`, the filter ends up being 
> pushed correctly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query

2020-02-11 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21778:
---
Attachment: HIVE-21778.2.patch

> CBO: "Struct is not null" gets evaluated as `nullable` always causing filter 
> miss in the query
> --
>
> Key: HIVE-21778
> URL: https://issues.apache.org/jira/browse/HIVE-21778
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 4.0.0, 2.3.5
>Reporter: Rajesh Balamohan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, test_null.q, 
> test_null.q.out
>
>
> {noformat}
> drop table if exists test_struct;
> CREATE external TABLE test_struct
> (
>   f1 string,
>   demo_struct struct,
>   datestr string
> );
> set hive.cbo.enable=true;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note 
> that demo_struct filter is not added here
>   Filter Operator
> predicate: (datestr = '2019-01-01') (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> set hive.cbo.enable=false;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean) <- Note that demo_struct filter is added when CBO is 
> turned off
>   Filter Operator
> predicate: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> {noformat}
> In CalcitePlanner::genFilterRelNode, the following code fails to evaluate 
> this filter. 
> {noformat}
> RexNode factoredFilterExpr = RexUtil
>   .pullFactors(cluster.getRexBuilder(), convertedFilterExpr);
> {noformat}
> Note that if we instead reference `demo_struct.f1`, the filter ends up being 
> pushed correctly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22763) 0 is accepted in 12-hour format during timestamp cast

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034948#comment-17034948
 ] 

Hive QA commented on HIVE-22763:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993111/HIVE-22763.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17990 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation3 (batchId=291)
org.apache.hive.service.server.TestInformationSchemaWithPrivilege.test 
(batchId=286)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20565/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20565/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20565/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993111 - PreCommit-HIVE-Build

> 0 is accepted in 12-hour format during timestamp cast
> -
>
> Key: HIVE-22763
> URL: https://issues.apache.org/jira/browse/HIVE-22763
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch, HIVE-22763.01.patch, HIVE-22763.01.patch
>
>
> A timestamp string in 12-hour format is currently parsed even if the hour is 
> 0; however, based on the [design 
> document|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit],
>  it should be rejected.
> h3. How to reproduce
> Run {code}select cast("2020-01-01 0 am 00" as timestamp format "yyyy-mm-dd 
> hh12 p.m. ss"){code}
> It shouldn't be parsed, as the hour component is 0.
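The validity rule for the hour field can be sketched minimally as below. The method name is hypothetical and this is not Hive's parser code, just the range check implied by the design document:

```java
public class Hh12Validation {
    // Per the design document cited above, an hh12 (12-hour) field must be
    // in the range 1..12, so an hour component of 0 should fail the cast.
    static boolean isValidHh12Hour(int hour) {
        return hour >= 1 && hour <= 12;
    }

    public static void main(String[] args) {
        System.out.println(isValidHh12Hour(0));  // false: "0 am" must be rejected
        System.out.println(isValidHh12Hour(12)); // true
    }
}
```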



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query

2020-02-11 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21778:
---
Status: Open  (was: Patch Available)

> CBO: "Struct is not null" gets evaluated as `nullable` always causing filter 
> miss in the query
> --
>
> Key: HIVE-21778
> URL: https://issues.apache.org/jira/browse/HIVE-21778
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5, 4.0.0
>Reporter: Rajesh Balamohan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, test_null.q, 
> test_null.q.out
>
>
> {noformat}
> drop table if exists test_struct;
> CREATE external TABLE test_struct
> (
>   f1 string,
>   demo_struct struct,
>   datestr string
> );
> set hive.cbo.enable=true;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note 
> that demo_struct filter is not added here
>   Filter Operator
> predicate: (datestr = '2019-01-01') (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> set hive.cbo.enable=false;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean) <- Note that demo_struct filter is added when CBO is 
> turned off
>   Filter Operator
> predicate: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> {noformat}
> In CalcitePlanner::genFilterRelNode, the following code fails to evaluate 
> this filter. 
> {noformat}
> RexNode factoredFilterExpr = RexUtil
>   .pullFactors(cluster.getRexBuilder(), convertedFilterExpr);
> {noformat}
> Note that if we instead reference `demo_struct.f1`, the filter ends up being 
> pushed correctly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22877 started by Mustafa Iman.
---
> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal-to-Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `scale` is smaller than or equal to 18; precision is 
> irrelevant.
> Since the vectorized generic UDF expression takes precision into account, it 
> computes the wrong output column vector type: Decimal instead of Decimal64. 
> This in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22877) Wrong decimal boundary for casting to Decimal64

2020-02-11 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman reassigned HIVE-22877:
---


> Wrong decimal boundary for casting to Decimal64
> ---
>
> Key: HIVE-22877
> URL: https://issues.apache.org/jira/browse/HIVE-22877
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>
> During vectorization, decimal fields that are obtained via generic UDFs are 
> cast to Decimal64 in some circumstances. For the decimal-to-Decimal64 cast, 
> Hive compares the source column's `scale + precision` to 18 (the maximum 
> number of digits that can be represented by a long). A decimal can fit in a 
> long as long as its `scale` is smaller than or equal to 18; precision is 
> irrelevant.
> Since the vectorized generic UDF expression takes precision into account, it 
> computes the wrong output column vector type: Decimal instead of Decimal64. 
> This in turn causes a ClassCastException down the operator chain.
> The query below fails with a ClassCastException:
>  
> {code:java}
> create table mini_store
> (
>  s_store_sk int,
>  s_store_id string
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> create table mini_sales
> (
>  ss_store_sk int,
>  ss_quantity int,
>  ss_sales_price decimal(7,2)
> )
> row format delimited fields terminated by '\t'
> STORED AS ORC;
> insert into mini_store values (1, 'store');
> insert into mini_sales values (1, 2, 1.2);
> select s_store_id, coalesce(ss_sales_price*ss_quantity,0) sumsales
> from mini_sales, mini_store where ss_store_sk = s_store_sk
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22763) 0 is accepted in 12-hour format during timestamp cast

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034929#comment-17034929
 ] 

Hive QA commented on HIVE-22763:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20565/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20565/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> 0 is accepted in 12-hour format during timestamp cast
> -
>
> Key: HIVE-22763
> URL: https://issues.apache.org/jira/browse/HIVE-22763
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22763.01.patch, HIVE-22763.01.patch, 
> HIVE-22763.01.patch, HIVE-22763.01.patch, HIVE-22763.01.patch
>
>
> A timestamp string in 12-hour format is currently parsed even if the hour is 
> 0; however, based on the [design 
> document|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit],
>  it should be rejected.
> h3. How to reproduce
> Run {code}select cast("2020-01-01 0 am 00" as timestamp format "yyyy-mm-dd 
> hh12 p.m. ss"){code}
> It shouldn't be parsed, as the hour component is 0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22610) Minor compaction for MM (insert-only) tables

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034921#comment-17034921
 ] 

Hive QA commented on HIVE-22610:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993110/HIVE-22610.03.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18002 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20564/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20564/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20564/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993110 - PreCommit-HIVE-Build

> Minor compaction for MM (insert-only) tables
> 
>
> Key: HIVE-22610
> URL: https://issues.apache.org/jira/browse/HIVE-22610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-22610.01.patch, HIVE-22610.02.patch, 
> HIVE-22610.03.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22850) Optimise lock acquisition in TxnHandler

2020-02-11 Thread Rajesh Balamohan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034905#comment-17034905
 ] 

Rajesh Balamohan commented on HIVE-22850:
-

[~zchovan], It is not newly introduced code in TxnHandler; 
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L4432]
 in the existing "apache master" has the same issue. Batching for the entire 
set of queries in TxnHandler has to be done in a separate ticket. Since these 
are mainly database checks, users may not have hit this issue on Oracle yet 
(i.e., having 2000 databases in Hive backed by Oracle).

> Optimise lock acquisition in TxnHandler
> ---
>
> Key: HIVE-22850
> URL: https://issues.apache.org/jira/browse/HIVE-22850
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-22850.1.patch, HIVE-22850.2.patch, 
> HIVE-22850.3.patch, Screenshot 2020-02-07 at 4.14.51 AM.jpg, jumpTableInfo.png
>
>
> With concurrent queries, the time taken for lock acquisition increases 
> substantially. As part of lock acquisition, {{TxnHandler::checkLock}} gets 
> invoked. This involves acquiring a mutex and comparing the requested locks 
> with the existing locks in the {{HIVE_LOCKS}} table.
> With concurrent queries, the time taken for this check grows, which 
> significantly increases the time other threads spend waiting for the mutex 
> (due to select for update). In a synthetic workload, it was on the order of 
> 10+ seconds. This codepath can be optimized when all lock requests are 
> SHARED_READ.
>  
>  
> !Screenshot 2020-02-07 at 4.14.51 AM.jpg|width=743,height=348!
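The proposed fast path can be sketched as follows; the enum and method are illustrative stand-ins, not TxnHandler's real API:

```java
import java.util.List;

public class CheckLockSketch {
    enum LockType { SHARED_READ, SHARED_WRITE, EXCLUSIVE }

    // Sketch of the optimization described above: SHARED_READ locks only
    // conflict with write/exclusive locks, so when every requested lock is
    // SHARED_READ and no write lock is currently held, the expensive
    // comparison against the HIVE_LOCKS table can be skipped entirely.
    static boolean canSkipFullCheck(List<LockType> requested, boolean writeLockHeld) {
        boolean allSharedRead = requested.stream()
                .allMatch(t -> t == LockType.SHARED_READ);
        return allSharedRead && !writeLockHeld;
    }

    public static void main(String[] args) {
        System.out.println(canSkipFullCheck(
                List.of(LockType.SHARED_READ, LockType.SHARED_READ), false)); // true
        System.out.println(canSkipFullCheck(
                List.of(LockType.SHARED_READ, LockType.SHARED_WRITE), false)); // false
    }
}
```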



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22825) Reduce directory lookup cost for acid tables

2020-02-11 Thread Rajesh Balamohan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034904#comment-17034904
 ] 

Rajesh Balamohan commented on HIVE-22825:
-

Thanks [~ashutoshc]. Re-uploading the same patch for tests.

> Reduce directory lookup cost for acid tables
> 
>
> Key: HIVE-22825
> URL: https://issues.apache.org/jira/browse/HIVE-22825
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-22825.1.patch, HIVE-22825.2.patch, 
> HIVE-22825.3.patch, HIVE-22825.4.patch, HIVE-22825.5.patch, HIVE-22825.6.patch
>
>
> With object stores, directory lookups are expensive. For acid tables, it 
> would be good to have a directory cache to reduce the number of lookup calls.
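A minimal sketch of the kind of directory cache the issue describes, with hypothetical names (the real patch works against Hive's FileSystem layer, not this toy interface):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

/** Hypothetical directory-listing cache: object-store LIST calls are
 *  expensive, so results are memoized per path and reused until
 *  explicitly invalidated (e.g. after a compaction rewrites a delta dir). */
class DirListingCache {
  private final ConcurrentMap<String, List<String>> cache = new ConcurrentHashMap<>();

  List<String> list(String path, Function<String, List<String>> loader) {
    // one LIST call per path; subsequent calls hit the in-memory map
    return cache.computeIfAbsent(path, loader);
  }

  void invalidate(String path) {
    cache.remove(path);
  }
}
```

The hard part in practice is invalidation: any writer (compactor, insert) must evict the affected paths, otherwise readers see stale delta directories.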



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22825) Reduce directory lookup cost for acid tables

2020-02-11 Thread Rajesh Balamohan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-22825:

Attachment: HIVE-22825.6.patch

> Reduce directory lookup cost for acid tables
> 
>
> Key: HIVE-22825
> URL: https://issues.apache.org/jira/browse/HIVE-22825
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-22825.1.patch, HIVE-22825.2.patch, 
> HIVE-22825.3.patch, HIVE-22825.4.patch, HIVE-22825.5.patch, HIVE-22825.6.patch
>
>
> With object stores, directory lookups are expensive. For acid tables, it 
> would be good to have a directory cache to reduce the number of lookup calls.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22610) Minor compaction for MM (insert-only) tables

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034891#comment-17034891
 ] 

Hive QA commented on HIVE-22610:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 8 
unchanged - 3 fixed = 8 total (was 11) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20564/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20564/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Minor compaction for MM (insert-only) tables
> 
>
> Key: HIVE-22610
> URL: https://issues.apache.org/jira/browse/HIVE-22610
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-22610.01.patch, HIVE-22610.02.patch, 
> HIVE-22610.03.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22781) Add ability to immediately execute a scheduled query

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034855#comment-17034855
 ] 

Hive QA commented on HIVE-22781:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993107/HIVE-22781.05.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17991 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20563/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20563/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20563/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993107 - PreCommit-HIVE-Build

> Add ability to immediately execute a scheduled query
> 
>
> Key: HIVE-22781
> URL: https://issues.apache.org/jira/browse/HIVE-22781
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22781.01.patch, HIVE-22781.02.patch, 
> HIVE-22781.03.patch, HIVE-22781.04.patch, HIVE-22781.04.patch, 
> HIVE-22781.04.patch, HIVE-22781.05.patch, HIVE-22781.05.patch
>
>
> there are some differences between when the system invokes the scheduled query 
> and when the user executes it in a shell - forcing the schedule to run might be 
> useful in developing/debugging schedules
> something like:
> {code}
> alter scheduled query a execute
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22781) Add ability to immediately execute a scheduled query

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034826#comment-17034826
 ] 

Hive QA commented on HIVE-22781:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
0s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
16s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
56s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 2 new + 12 unchanged - 0 fixed 
= 14 total (was 12) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20563/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20563/yetus/diff-checkstyle-ql.txt
 |
| modules | C: parser standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20563/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Add ability to immediately execute a scheduled query
> 
>
> Key: HIVE-22781
> URL: https://issues.apache.org/jira/browse/HIVE-22781
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22781.01.patch, HIVE-22781.02.patch, 
> HIVE-22781.03.patch, HIVE-22781.04.patch, HIVE-22781.04.patch, 
> HIVE-22781.04.patch, HIVE-22781.05.patch, HIVE-22781.05.patch
>
>
> there are some differences between when the system invokes the scheduled query 
> and when the user executes it in a shell - forcing the schedule to run might be 
> useful in developing/debugging schedules
> something like:
> {code}
> alter scheduled query a execute
> {code}



--
This message was sent 

[jira] [Commented] (HIVE-22876) Do not enforce package-info.java files by checkstyle

2020-02-11 Thread Miklos Gergely (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034805#comment-17034805
 ] 

Miklos Gergely commented on HIVE-22876:
---

Patch is ready; besides updating the checkstyle.xml, I'm also planning to remove 
the many package-info.java files that I've created because of this rule.

> Do not enforce package-info.java files by checkstyle
> 
>
> Key: HIVE-22876
> URL: https://issues.apache.org/jira/browse/HIVE-22876
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22876.01.patch
>
>
> Currently checkstyle enforces every package to have a package-info.java file. 
> This is not really followed by anyone, so it can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22876) Do not enforce package-info.java files by checkstyle

2020-02-11 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22876:
--
Status: Patch Available  (was: Open)

> Do not enforce package-info.java files by checkstyle
> 
>
> Key: HIVE-22876
> URL: https://issues.apache.org/jira/browse/HIVE-22876
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22876.01.patch
>
>
> Currently checkstyle enforces every package to have a package-info.java file. 
> This is not really followed by anyone, so it can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22876) Do not enforce package-info.java files by checkstyle

2020-02-11 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22876:
--
Attachment: HIVE-22876.01.patch

> Do not enforce package-info.java files by checkstyle
> 
>
> Key: HIVE-22876
> URL: https://issues.apache.org/jira/browse/HIVE-22876
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22876.01.patch
>
>
> Currently checkstyle enforces every package to have a package-info.java file. 
> This is not really followed by anyone, so it can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22876) Do not enforce package-info.java files by checkstyle

2020-02-11 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-22876:
-


> Do not enforce package-info.java files by checkstyle
> 
>
> Key: HIVE-22876
> URL: https://issues.apache.org/jira/browse/HIVE-22876
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 4.0.0
>
>
> Currently checkstyle enforces every package to have a package-info.java file. 
> This is not really followed by anyone, so it can be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22844) Validate cm configs, add retries in fs apis for cm

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17034794#comment-17034794
 ] 

Hive QA commented on HIVE-22844:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993102/HIVE-22844.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 17992 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeader.testHouseKeepingThreadExistence
 (batchId=247)
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence
 (batchId=249)
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingNonLeader.testHouseKeepingThreadExistence
 (batchId=249)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20562/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20562/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20562/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993102 - PreCommit-HIVE-Build

> Validate cm configs, add retries in fs apis for cm
> --
>
> Key: HIVE-22844
> URL: https://issues.apache.org/jira/browse/HIVE-22844
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22844.patch, HIVE-22844.patch, HIVE-22844.patch, 
> HIVE-22844.patch, HIVE-22844.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # Retry the create cm root logic
>  # Rename encryptionZones to cmRootLocations to be more accurate
>  # Check cmRootEncrypted.isAbsolute() first, before creating anything
>  # Validate that fallbackNonEncryptedCmRootDir is really not encrypted
>  # Refactor the deleteTableData logic
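A hedged sketch of item 1, the retry idea for filesystem calls such as creating the changemanager root (illustrative names only, not the actual Hive changemanager API):

```java
import java.util.concurrent.Callable;

/** Hypothetical retry wrapper for transient filesystem failures:
 *  re-runs the operation up to maxAttempts times before giving up. */
class FsRetry {
  static <T> T withRetries(Callable<T> op, int maxAttempts) {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return op.call();
      } catch (Exception e) {
        last = e; // transient failure (e.g. IOException); try again
      }
    }
    throw new RuntimeException("giving up after " + maxAttempts + " attempts", last);
  }
}
```

A production version would typically retry only known-transient exception types and back off between attempts.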



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385419=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385419
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 19:54
Start Date: 11/Feb/20 19:54
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377863861
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
+  private TableInfoUtils() {
+    throw new UnsupportedOperationException("TableInfoUtils should not be instantiated");
+  }
+
+  public static void validateDatabase(Hive db, String databaseName) throws SemanticException {
 
 Review comment:
   Fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 385419)
Time Spent: 2h 20m  (was: 2h 10m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch, 
> HIVE-22747.03.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into smaller, more manageable 
> classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385420=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385420
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 19:55
Start Date: 11/Feb/20 19:55
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377864007
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
 
 Review comment:
   Fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 385420)
Time Spent: 2.5h  (was: 2h 20m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch, 
> HIVE-22747.03.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into smaller, more manageable 
> classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385418=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385418
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 19:54
Start Date: 11/Feb/20 19:54
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377863758
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
+  private TableInfoUtils() {
+    throw new UnsupportedOperationException("TableInfoUtils should not be instantiated");
+  }
+
+  public static void validateDatabase(Hive db, String databaseName) throws SemanticException {
+    try {
+      if (!db.databaseExists(databaseName)) {
+        throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName));
+      }
+    } catch (HiveException e) {
+      throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName), e);
+    }
+  }
+
+  public static void validateTable(Hive db, Table table, Map<String, String> partSpec) throws SemanticException {
+    if (partSpec != null) {
 
 Review comment:
   Fixed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 385418)
Time Spent: 2h 10m  (was: 2h)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch, 
> HIVE-22747.03.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into smaller, more manageable 
> classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385408=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385408
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 19:54
Start Date: 11/Feb/20 19:54
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377863674
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/DescTableAnalyzer.java
 ##
 @@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.common.TableName;
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.QueryState;
+import org.apache.hadoop.hive.ql.ddl.DDLWork;
+import org.apache.hadoop.hive.ql.ddl.table.info.TableInfoUtils;
+import org.apache.hadoop.hive.ql.ddl.DDLSemanticAnalyzerFactory.DDLType;
+import org.apache.hadoop.hive.ql.ddl.DDLUtils;
+import org.apache.hadoop.hive.ql.exec.Task;
+import org.apache.hadoop.hive.ql.exec.TaskFactory;
+import org.apache.hadoop.hive.ql.hooks.ReadEntity;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.InvalidTableException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.ASTNode;
+import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
+import org.apache.hadoop.hive.ql.parse.HiveParser;
+import org.apache.hadoop.hive.ql.parse.HiveTableName;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+import org.apache.hadoop.hive.ql.session.SessionState;
+
+/**
+ * Analyzer for table describing commands.
+ *
+ * A query like this will generate a tree as follows
+ *   "describe formatted default.maptable partition (b=100) id;"
+ * TOK_TABTYPE
+ *   TOK_TABNAME --> root for tablename, 2 child nodes mean DB specified
+ * default
+ * maptable
+ *   TOK_PARTSPEC  --> root node for partition spec. else columnName
+ * TOK_PARTVAL
+ *   b
+ *   100
+ *   id   --> root node for columnName
+ * formatted
+ */
+@DDLType(type=HiveParser.TOK_DESCTABLE)
+public class DescTableAnalyzer extends BaseSemanticAnalyzer {
+  public DescTableAnalyzer(QueryState queryState) throws SemanticException {
+    super(queryState);
+  }
+
+  @Override
+  public void analyzeInternal(ASTNode root) throws SemanticException {
+    ctx.setResFile(ctx.getLocalTmpPath());
+
+    ASTNode tableTypeExpr = (ASTNode) root.getChild(0);
+
+    TableName tableName = getTableName(tableTypeExpr);
+    Table table = getTable(tableName);
+
+    // process the second child node, if it exists, to get the partition spec(s)
+    Map<String, String> partitionSpec = getPartitionSpec(db, tableTypeExpr, tableName);
+    TableInfoUtils.validateTable(db, table, partitionSpec);
+
+    // process the third child node, if it exists, to get the column path
+    String columnPath = getColumnPath(db, tableTypeExpr, tableName, partitionSpec);
+
+    boolean showColStats = false;
+    boolean isFormatted = false;
+    boolean isExt = false;
+    if (root.getChildCount() == 2) {
+      int descOptions = root.getChild(1).getType();
+      isFormatted = descOptions == HiveParser.KW_FORMATTED;
+      isExt = descOptions == HiveParser.KW_EXTENDED;
+      // in case of "DESCRIBE FORMATTED tablename column_name" statement, colPath will contain tablename.column_name.
+      // If column_name is not specified colPath will be equal to tableName.
+      // This is how we can differentiate if we are describing a table or column.
+      if (columnPath != null && isFormatted) {
+        showColStats = true;
+      }
+    }
+
+    inputs.add(new ReadEntity(table));
+
+    DescTableDesc desc = new DescTableDesc(ctx.getResFile(), tableName, partitionSpec, columnPath, isExt, isFormatted);

[jira] [Updated] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22747:
----------------------------------
Attachment: HIVE-22747.03.patch

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> --------------------------------------------------------------------
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch, 
> HIVE-22747.03.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the amount of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22844) Validate cm configs, add retries in fs apis for cm

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034751#comment-17034751
 ] 

Hive QA commented on HIVE-22844:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
51s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
16s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
47s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20562/dev-support/hive-personality.sh
 |
| git revision | master / 8f46884 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20562/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Validate cm configs, add retries in fs apis for cm
> --------------------------------------------------
>
> Key: HIVE-22844
> URL: https://issues.apache.org/jira/browse/HIVE-22844
> Project: Hive
>  Issue Type: Bug
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22844.patch, HIVE-22844.patch, HIVE-22844.patch, 
> HIVE-22844.patch, HIVE-22844.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # Retry create cm root logic
>  # Rename encryptionZones to cmRootLocations to be more accurate
>  # Check cmRootEncrypted.isAbsolute() first before we go for creating anything
>  # Validate fallbackNonEncryptedCmRootDir if it's really not encrypted
>  # Refactor deleteTableData logic
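The "retry create cm root logic" item above can be pictured with a generic bounded-retry helper. This is a hedged sketch under assumed semantics — the helper name, the linear backoff, and the simulated flaky mkdir are invented for illustration, not taken from the actual HIVE-22844 patch:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch: retry an idempotent filesystem call, such as creating the
// CM (change-management) root, with bounded attempts and linear backoff.
// Only transient I/O failures are retried; anything else propagates.
public class RetryDemo {
  static <T> T withRetries(Callable<T> action, int maxAttempts, long waitMillis)
      throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return action.call();
      } catch (IOException e) { // retry only transient I/O failures
        last = e;
        Thread.sleep(waitMillis * attempt); // linear backoff between attempts
      }
    }
    throw last; // all attempts exhausted; surface the last failure
  }

  public static void main(String[] args) throws Exception {
    int[] calls = {0};
    // Fails twice, then succeeds -- simulates a flaky mkdir on the cm root.
    String result = withRetries(() -> {
      if (++calls[0] < 3) {
        throw new IOException("transient failure " + calls[0]);
      }
      return "cm root created";
    }, 5, 1L);
    System.out.println(result + " after " + calls[0] + " attempts");
  }
}
```

Catching only `IOException` matters here: a misconfigured path should fail fast rather than burn through retries.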



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385374&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385374
 ]

ASF GitHub Bot logged work on HIVE-22747:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 11/Feb/20 19:07
Start Date: 11/Feb/20 19:07
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377838392
 
 

 ##########
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##########
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
+  private TableInfoUtils() {
+    throw new UnsupportedOperationException("TableInfoUtils should not be instantiated");
+  }
+
+  public static void validateDatabase(Hive db, String databaseName) throws SemanticException {
+    try {
+      if (!db.databaseExists(databaseName)) {
+        throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName));
+      }
+    } catch (HiveException e) {
+      throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName), e);
+    }
+  }
+
+  public static void validateTable(Hive db, Table table, Map<String, String> partSpec) throws SemanticException {
+if (partSpec != null) {
 
 Review comment:
   Actually no need to move it to Hive, PartitionUtils can be called directly 
from wherever it is needed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

Worklog Id: (was: 385374)
Time Spent: 1h 50m  (was: 1h 40m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> --------------------------------------------------------------------
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the amount of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22589) Add storage support for ProlepticCalendar in ORC, Parquet, and Avro

2020-02-11 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22589:
-------------------------------------------
Attachment: HIVE-22589.07.patch

> Add storage support for ProlepticCalendar in ORC, Parquet, and Avro
> --------------------------------------------------------------------
>
> Key: HIVE-22589
> URL: https://issues.apache.org/jira/browse/HIVE-22589
> Project: Hive
>  Issue Type: Bug
>  Components: Avro, ORC, Parquet
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.3
>
> Attachments: HIVE-22589.01.patch, HIVE-22589.02.patch, 
> HIVE-22589.03.patch, HIVE-22589.04.patch, HIVE-22589.05.patch, 
> HIVE-22589.06.patch, HIVE-22589.07.patch, HIVE-22589.07.patch, 
> HIVE-22589.07.patch, HIVE-22589.07.patch, HIVE-22589.patch, HIVE-22589.patch
>
>
> Hive recently moved its processing to the proleptic calendar, which has 
> created some issues for users who have dates before 1580 AD.
> HIVE-22405 extended the column vectors for times & dates to encode which 
> calendar they are using.
> This issue is to support proleptic calendar in ORC, Parquet, and Avro, when 
> files are written/read by Hive. To preserve compatibility with other engines 
> until they upgrade their readers, files will be written using hybrid calendar 
> by default. Default behavior when files do not contain calendar information 
> in their metadata is configurable.
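The hybrid-vs-proleptic gap described above is easy to demonstrate with the JDK alone: `java.util.GregorianCalendar` defaults to the hybrid Julian/Gregorian calendar (Julian before the 1582 cutover), while `java.time` is proleptic Gregorian, so the same nominal pre-1582 date maps to instants several days apart. This is an illustration only, not Hive, ORC, Parquet, or Avro code:

```java
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.util.GregorianCalendar;
import java.util.TimeZone;

// Shows why files must record which calendar their dates were written in:
// the same printed date denotes different instants under the two calendars.
public class CalendarShift {
  /** Days between the proleptic and hybrid interpretations of a date. */
  static long dayShift(int year, int month, int dayOfMonth) {
    // Hybrid calendar: Julian rules before the 1582-10-15 cutover.
    GregorianCalendar hybrid = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
    hybrid.clear();
    hybrid.set(year, month - 1, dayOfMonth);
    long hybridMillis = hybrid.getTimeInMillis();

    // Proleptic Gregorian: Gregorian rules extended backwards indefinitely.
    long prolepticMillis = LocalDate.of(year, month, dayOfMonth)
        .atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli();

    return (prolepticMillis - hybridMillis) / 86_400_000L;
  }

  public static void main(String[] args) {
    System.out.println("Shift for 1200-01-01: " + dayShift(1200, 1, 1) + " days");
    System.out.println("Shift for 2000-01-01: " + dayShift(2000, 1, 1) + " days");
  }
}
```

For 1200-01-01 the two interpretations disagree by a week, while modern dates agree exactly — which is why a reader that guesses the wrong calendar silently corrupts historical dates.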



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22589) Add storage support for ProlepticCalendar in ORC, Parquet, and Avro

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034715#comment-17034715
 ] 

Hive QA commented on HIVE-22589:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} storage-api in master has 58 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
43s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} llap-server in master has 90 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} common: The patch generated 4 new + 370 unchanged - 0 
fixed = 374 total (was 370) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} serde: The patch generated 2 new + 105 unchanged - 3 
fixed = 107 total (was 108) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
58s{color} | {color:red} ql: The patch generated 27 new + 1565 unchanged - 11 
fixed = 1592 total (was 1576) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} llap-server: The patch generated 3 new + 185 unchanged 
- 1 fixed = 188 total (was 186) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
25s{color} | {color:red} root: The patch generated 37 new + 2232 unchanged - 15 
fixed = 2269 total (was 2247) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
9s{color} | {color:red} ql generated 3 new + 1531 unchanged - 1 fixed = 1534 
total (was 1532) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 18m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 51s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385358
 ]

ASF GitHub Bot logged work on HIVE-22747:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:34
Start Date: 11/Feb/20 18:34
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377820258
 
 

 ##########
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/DescTableAnalyzer.java
 ##########
 @@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.common.TableName;
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.QueryState;
+import org.apache.hadoop.hive.ql.ddl.DDLWork;
+import org.apache.hadoop.hive.ql.ddl.table.info.TableInfoUtils;
+import org.apache.hadoop.hive.ql.ddl.DDLSemanticAnalyzerFactory.DDLType;
+import org.apache.hadoop.hive.ql.ddl.DDLUtils;
+import org.apache.hadoop.hive.ql.exec.Task;
+import org.apache.hadoop.hive.ql.exec.TaskFactory;
+import org.apache.hadoop.hive.ql.hooks.ReadEntity;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.InvalidTableException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.ASTNode;
+import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
+import org.apache.hadoop.hive.ql.parse.HiveParser;
+import org.apache.hadoop.hive.ql.parse.HiveTableName;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+import org.apache.hadoop.hive.ql.session.SessionState;
+
+/**
+ * Analyzer for table describing commands.
+ *
+ * A query like this will generate a tree as follows
+ *   "describe formatted default.maptable partition (b=100) id;"
+ * TOK_TABTYPE
+ *   TOK_TABNAME --> root for tablename, 2 child nodes mean DB specified
+ * default
+ * maptable
+ *   TOK_PARTSPEC  --> root node for partition spec. else columnName
+ * TOK_PARTVAL
+ *   b
+ *   100
+ *   id   --> root node for columnName
+ * formatted
+ */
+@DDLType(type=HiveParser.TOK_DESCTABLE)
+public class DescTableAnalyzer extends BaseSemanticAnalyzer {
+  public DescTableAnalyzer(QueryState queryState) throws SemanticException {
+    super(queryState);
+  }
+
+  @Override
+  public void analyzeInternal(ASTNode root) throws SemanticException {
+    ctx.setResFile(ctx.getLocalTmpPath());
+
+    ASTNode tableTypeExpr = (ASTNode) root.getChild(0);
+
+    TableName tableName = getTableName(tableTypeExpr);
+    Table table = getTable(tableName);
+
+    // process the second child node, if it exists, to get the partition spec(s)
+    Map<String, String> partitionSpec = getPartitionSpec(db, tableTypeExpr, tableName);
+    TableInfoUtils.validateTable(db, table, partitionSpec);
+
+    // process the third child node, if it exists, to get the column path
+    String columnPath = getColumnPath(db, tableTypeExpr, tableName, partitionSpec);
+
+    boolean showColStats = false;
+    boolean isFormatted = false;
+    boolean isExt = false;
+    if (root.getChildCount() == 2) {
+      int descOptions = root.getChild(1).getType();
+      isFormatted = descOptions == HiveParser.KW_FORMATTED;
+      isExt = descOptions == HiveParser.KW_EXTENDED;
+      // in case of "DESCRIBE FORMATTED tablename column_name" statement, colPath will contain tablename.column_name.
+      // If column_name is not specified colPath will be equal to tableName.
+      // This is how we can differentiate if we are describing a table or column.
+      if (columnPath != null && isFormatted) {
+        showColStats = true;
+      }
+    }
+
+    inputs.add(new ReadEntity(table));
+
+    DescTableDesc desc = new DescTableDesc(ctx.getResFile(), tableName, partitionSpec, columnPath, isExt, isFormatted);

[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385357&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385357
 ]

ASF GitHub Bot logged work on HIVE-22747:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:33
Start Date: 11/Feb/20 18:33
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377820124
 
 

 ##########
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##########
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
+  private TableInfoUtils() {
+    throw new UnsupportedOperationException("TableInfoUtils should not be instantiated");
+  }
+
+  public static void validateDatabase(Hive db, String databaseName) throws SemanticException {
+    try {
+      if (!db.databaseExists(databaseName)) {
+        throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName));
+      }
+    } catch (HiveException e) {
+      throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName), e);
+    }
+  }
+
+  public static void validateTable(Hive db, Table table, Map<String, String> partSpec) throws SemanticException {
+if (partSpec != null) {
 
 Review comment:
   It will be moved to Hive.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

Worklog Id: (was: 385357)
Time Spent: 1.5h  (was: 1h 20m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> --------------------------------------------------------------------
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the amount of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385354&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385354
 ]

ASF GitHub Bot logged work on HIVE-22747:
-----------------------------------------

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:33
Start Date: 11/Feb/20 18:33
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377819987
 
 

 ##########
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##########
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
 
 Review comment:
   After some consideration I agree, I'll move these two functions to Hive, and 
get rid of this class.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 385354)
Time Spent: 1h 10m  (was: 1h)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385355&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385355
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:33
Start Date: 11/Feb/20 18:33
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377820073
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
+  private TableInfoUtils() {
+    throw new UnsupportedOperationException("TableInfoUtils should not be instantiated");
+  }
+
+  public static void validateDatabase(Hive db, String databaseName) throws SemanticException {
 
 Review comment:
   It will be moved to Hive.
 



Issue Time Tracking
---

Worklog Id: (was: 385355)
Time Spent: 1h 20m  (was: 1h 10m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.





[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385343&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385343
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:27
Start Date: 11/Feb/20 18:27
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377816730
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
+  private TableInfoUtils() {
+    throw new UnsupportedOperationException("TableInfoUtils should not be instantiated");
+  }
+
+  public static void validateDatabase(Hive db, String databaseName) throws SemanticException {
+    try {
+      if (!db.databaseExists(databaseName)) {
+        throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName));
+      }
+    } catch (HiveException e) {
+      throw new SemanticException(ErrorMsg.DATABASE_NOT_EXISTS.getMsg(databaseName), e);
+    }
+  }
+
+  public static void validateTable(Hive db, Table table, Map<String, String> partSpec) throws SemanticException {
+    if (partSpec != null) {
 
 Review comment:
   It validates that the partition exists, if there is a partition specified. 
It throws a SemanticException if the partition doesn't exist.
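The check the reviewer describes can be sketched outside Hive as follows. This is a minimal, self-contained illustration: `PartitionResolver`, its `exists` method, and `ValidationException` are hypothetical stand-ins, not Hive's actual `PartitionUtils` or `SemanticException` API.

```java
import java.util.Map;

/** Hypothetical stand-in for Hive's partition lookup; not the real API. */
interface PartitionResolver {
    boolean exists(String tableName, Map<String, String> partSpec);
}

/** Thrown when semantic validation fails; mirrors SemanticException in spirit only. */
class ValidationException extends Exception {
    ValidationException(String msg) { super(msg); }
}

public class PartitionValidationSketch {
    /**
     * If a partition spec was given, fail analysis when that partition does not
     * exist; a null spec means the command targets the whole table, so there is
     * nothing to check.
     */
    static void validateTable(PartitionResolver resolver, String tableName,
                              Map<String, String> partSpec) throws ValidationException {
        if (partSpec != null && !resolver.exists(tableName, partSpec)) {
            throw new ValidationException("Partition not found: " + partSpec + " in " + tableName);
        }
    }

    public static void main(String[] args) throws Exception {
        // Toy resolver: only the ds=2020-02-11 partition of table t exists.
        PartitionResolver resolver = (t, spec) -> "2020-02-11".equals(spec.get("ds"));
        validateTable(resolver, "t", null);                        // no spec: passes
        validateTable(resolver, "t", Map.of("ds", "2020-02-11")); // exists: passes
        try {
            validateTable(resolver, "t", Map.of("ds", "1999-01-01"));
        } catch (ValidationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```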
 



Issue Time Tracking
---

Worklog Id: (was: 385343)
Time Spent: 1h  (was: 50m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.





[jira] [Commented] (HIVE-22589) Add storage support for ProlepticCalendar in ORC, Parquet, and Avro

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034695#comment-17034695
 ] 

Hive QA commented on HIVE-22589:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12992792/HIVE-22589.07.patch

{color:green}SUCCESS:{color} +1 due to 31 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18009 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver[url_hook] 
(batchId=299)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20561/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20561/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20561/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12992792 - PreCommit-HIVE-Build

> Add storage support for ProlepticCalendar in ORC, Parquet, and Avro
> ---
>
> Key: HIVE-22589
> URL: https://issues.apache.org/jira/browse/HIVE-22589
> Project: Hive
>  Issue Type: Bug
>  Components: Avro, ORC, Parquet
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.3
>
> Attachments: HIVE-22589.01.patch, HIVE-22589.02.patch, 
> HIVE-22589.03.patch, HIVE-22589.04.patch, HIVE-22589.05.patch, 
> HIVE-22589.06.patch, HIVE-22589.07.patch, HIVE-22589.07.patch, 
> HIVE-22589.07.patch, HIVE-22589.patch, HIVE-22589.patch
>
>
> Hive recently moved its processing to the proleptic calendar, which has 
> created some issues for users who have dates before 1580 AD.
> HIVE-22405 extended the column vectors for times & dates to encode which 
> calendar they are using.
> This issue is to support proleptic calendar in ORC, Parquet, and Avro, when 
> files are written/read by Hive. To preserve compatibility with other engines 
> until they upgrade their readers, files will be written using hybrid calendar 
> by default. Default behavior when files do not contain calendar information 
> in their metadata is configurable.
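For context on why the calendar choice matters for pre-1582 dates: the hybrid (Julian before the 1582 cutover, Gregorian after) and proleptic Gregorian calendars attach the same label to instants that are days apart. A minimal sketch using the JDK's `GregorianCalendar`, which is hybrid by default and becomes proleptic when the changeover is pushed to the distant past:

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class CalendarDriftDemo {
    public static void main(String[] args) {
        TimeZone utc = TimeZone.getTimeZone("UTC");

        // Default GregorianCalendar is hybrid: Julian rules before Oct 15, 1582.
        GregorianCalendar hybrid = new GregorianCalendar(utc);
        hybrid.clear();
        hybrid.set(1200, Calendar.MARCH, 1);

        // Moving the changeover to the distant past yields a proleptic Gregorian calendar.
        GregorianCalendar proleptic = new GregorianCalendar(utc);
        proleptic.setGregorianChange(new Date(Long.MIN_VALUE));
        proleptic.clear();
        proleptic.set(1200, Calendar.MARCH, 1);

        // Around the year 1200 the Julian calendar lags the proleptic Gregorian
        // by 7 days, so the same "1200-03-01" label names instants 7 days apart.
        long diffDays = (hybrid.getTimeInMillis() - proleptic.getTimeInMillis()) / 86_400_000L;
        System.out.println("1200-03-01 drift in days: " + diffDays);
    }
}
```

This is exactly the ambiguity the file-format metadata has to resolve: a reader must know which calendar the writer used to interpret stored day numbers correctly.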





[jira] [Updated] (HIVE-16502) Relax hard dependency on SessionState in Authentication classes

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-16502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-16502:
--
Labels: pull-request-available  (was: )

> Relax hard dependency on SessionState in Authentication classes
> ---
>
> Key: HIVE-16502
> URL: https://issues.apache.org/jira/browse/HIVE-16502
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore, Server Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-16502.02.patch, HIVE-16502.1.patch
>
>
> It would be better to have the auth system depend on an interface instead of 
> the whole {{SessionState}}





[jira] [Work logged] (HIVE-16502) Relax hard dependency on SessionState in Authentication classes

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-16502?focusedWorklogId=385341&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385341
 ]

ASF GitHub Bot logged work on HIVE-16502:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:23
Start Date: 11/Feb/20 18:23
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #904: HIVE-16502 
isessionstate
URL: https://github.com/apache/hive/pull/904
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 385341)
Remaining Estimate: 0h
Time Spent: 10m

> Relax hard dependency on SessionState in Authentication classes
> ---
>
> Key: HIVE-16502
> URL: https://issues.apache.org/jira/browse/HIVE-16502
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore, Server Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-16502.02.patch, HIVE-16502.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It would be better to have the auth system depend on an interface instead of 
> the whole {{SessionState}}





[jira] [Work logged] (HIVE-22866) Add more testcases for scheduled queries

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22866?focusedWorklogId=385338&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385338
 ]

ASF GitHub Bot logged work on HIVE-22866:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:22
Start Date: 11/Feb/20 18:22
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #902: HIVE-22866 
schq testcases
URL: https://github.com/apache/hive/pull/902
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 385338)
Remaining Estimate: 0h
Time Spent: 10m

> Add more testcases for scheduled queries
> 
>
> Key: HIVE-22866
> URL: https://issues.apache.org/jira/browse/HIVE-22866
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22866.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> the examples in the wiki should be added as test cases





[jira] [Updated] (HIVE-22866) Add more testcases for scheduled queries

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22866:
--
Labels: pull-request-available  (was: )

> Add more testcases for scheduled queries
> 
>
> Key: HIVE-22866
> URL: https://issues.apache.org/jira/browse/HIVE-22866
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22866.01.patch
>
>
> the examples in the wiki should be added as test cases





[jira] [Work logged] (HIVE-16355) Service: embedded mode should only be available if service is loaded onto the classpath

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-16355?focusedWorklogId=385339&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385339
 ]

ASF GitHub Bot logged work on HIVE-16355:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:22
Start Date: 11/Feb/20 18:22
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #903: HIVE-16355 
service embedded
URL: https://github.com/apache/hive/pull/903
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 385339)
Remaining Estimate: 0h
Time Spent: 10m

> Service: embedded mode should only be available if service is loaded onto the 
> classpath
> ---
>
> Key: HIVE-16355
> URL: https://issues.apache.org/jira/browse/HIVE-16355
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore, Server Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-16355.06.patch, HIVE-16355.1.patch, 
> HIVE-16355.2.patch, HIVE-16355.2.patch, HIVE-16355.3.patch, 
> HIVE-16355.4.patch, HIVE-16355.4.patch, HIVE-16355.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I would like to relax the hard reference to 
> {{EmbeddedThriftBinaryCLIService}} to be only used in case {{service}} module 
> is loaded onto the classpath.





[jira] [Updated] (HIVE-16355) Service: embedded mode should only be available if service is loaded onto the classpath

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-16355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-16355:
--
Labels: pull-request-available  (was: )

> Service: embedded mode should only be available if service is loaded onto the 
> classpath
> ---
>
> Key: HIVE-16355
> URL: https://issues.apache.org/jira/browse/HIVE-16355
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore, Server Infrastructure
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-16355.06.patch, HIVE-16355.1.patch, 
> HIVE-16355.2.patch, HIVE-16355.2.patch, HIVE-16355.3.patch, 
> HIVE-16355.4.patch, HIVE-16355.4.patch, HIVE-16355.5.patch
>
>
> I would like to relax the hard reference to 
> {{EmbeddedThriftBinaryCLIService}} to be only used in case {{service}} module 
> is loaded onto the classpath.





[jira] [Updated] (HIVE-22873) Make it possible to identify which hs2 instance executed a scheduled query

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22873:
--
Labels: pull-request-available  (was: )

> Make it possible to identify which hs2 instance executed a scheduled query
> --
>
> Key: HIVE-22873
> URL: https://issues.apache.org/jira/browse/HIVE-22873
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22873.01.patch
>
>
> right now only the query_id is shown; in case of multiple hs2 instances, 
> users have to resort to grepping the logs for the given query id





[jira] [Work logged] (HIVE-22873) Make it possible to identify which hs2 instance executed a scheduled query

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22873?focusedWorklogId=385335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385335
 ]

ASF GitHub Bot logged work on HIVE-22873:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:20
Start Date: 11/Feb/20 18:20
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #901: HIVE-22873 
schq ident
URL: https://github.com/apache/hive/pull/901
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 385335)
Remaining Estimate: 0h
Time Spent: 10m

> Make it possible to identify which hs2 instance executed a scheduled query
> --
>
> Key: HIVE-22873
> URL: https://issues.apache.org/jira/browse/HIVE-22873
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22873.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> right now only the query_id is shown; in case of multiple hs2 instances, 
> users have to resort to grepping the logs for the given query id





[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385331
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:15
Start Date: 11/Feb/20 18:15
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377810208
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java
 ##
 @@ -219,4 +220,21 @@ private static String getHS2Host(HiveConf conf) throws SemanticException {
 
     throw new SemanticException("Kill query is only supported in HiveServer2 (not hive cli)");
   }
+
+  /**
+   * Get the fully qualified name in the node.
+   * E.g. the node of the form ^(DOT ^(DOT a b) c) will generate a name of the 
form "a.b.c".
+   */
+  public static String getFQName(ASTNode node) {
 
 Review comment:
   The original code for this was not removed yet, as it is still used by some 
other analyzers which are still in DDLSemanticAnalyzer; see the static class 
QualifiedNameUtil. I don't see why it wouldn't work in the quoted case; I guess 
the quotes are removed before building the AST, isn't that the case?
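The recursion described by the quoted javadoc (turn `^(DOT ^(DOT a b) c)` into `"a.b.c"`) is straightforward; here is a self-contained sketch with a toy `Node` type. Hive's real `ASTNode` is richer, and `fqName` below is an illustrative stand-in for `getFQName`, not the actual implementation.

```java
import java.util.Arrays;
import java.util.List;

/** Minimal stand-in for an AST node; Hive's real ASTNode is richer. */
final class Node {
    final String text;          // token text, e.g. "a", or "." for a DOT node
    final List<Node> children;  // empty for leaves

    Node(String text, Node... children) {
        this.text = text;
        this.children = Arrays.asList(children);
    }

    boolean isDot() { return ".".equals(text); }
}

public class FqNameSketch {
    /** Recursively join the parts of a DOT tree: ^(DOT ^(DOT a b) c) -> "a.b.c". */
    static String fqName(Node node) {
        if (node.isDot()) {
            return fqName(node.children.get(0)) + "." + fqName(node.children.get(1));
        }
        return node.text;
    }

    public static void main(String[] args) {
        // Build ^(DOT ^(DOT a b) c)
        Node tree = new Node(".", new Node(".", new Node("a"), new Node("b")), new Node("c"));
        System.out.println(fqName(tree)); // prints "a.b.c"
    }
}
```

Note that this walk concatenates raw token texts, which is why the reviewer's question about quoted identifiers matters: it only yields the right name if quoting has already been stripped before the tree is built.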
 



Issue Time Tracking
---

Worklog Id: (was: 385331)
Time Spent: 50m  (was: 40m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.





[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385322&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385322
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 18:09
Start Date: 11/Feb/20 18:09
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #882: 
HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377807346
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/package-info.java
 ##
 @@ -0,0 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/** Show table status DDL operation. */
 
 Review comment:
   I agree, but our checkstyle.xml tells us to put them there. I'll create 
a jira, and if we can agree to remove it from checkstyle.xml, then I'll remove 
all the package-info.java files from ddl.
 



Issue Time Tracking
---

Worklog Id: (was: 385322)
Time Spent: 40m  (was: 0.5h)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.





[jira] [Updated] (HIVE-22873) Make it possible to identify which hs2 instance executed a scheduled query

2020-02-11 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22873:

Status: Patch Available  (was: Open)

> Make it possible to identify which hs2 instance executed a scheduled query
> --
>
> Key: HIVE-22873
> URL: https://issues.apache.org/jira/browse/HIVE-22873
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22873.01.patch
>
>
> right now only the query_id is shown; in case of multiple hs2 instances, 
> users have to resort to grepping the logs for the given query id





[jira] [Updated] (HIVE-22873) Make it possible to identify which hs2 instance executed a scheduled query

2020-02-11 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22873:

Attachment: HIVE-22873.01.patch

> Make it possible to identify which hs2 instance executed a scheduled query
> --
>
> Key: HIVE-22873
> URL: https://issues.apache.org/jira/browse/HIVE-22873
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22873.01.patch
>
>
> right now only the query_id is shown; in case of multiple hs2 instances, 
> users have to resort to grepping the logs for the given query id





[jira] [Updated] (HIVE-22860) Support metadata only replication for external tables

2020-02-11 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22860:
---
Attachment: HIVE-22860.patch
Status: Patch Available  (was: In Progress)

> Support metadata only replication for external tables
> -
>
> Key: HIVE-22860
> URL: https://issues.apache.org/jira/browse/HIVE-22860
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22860.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Work started] (HIVE-22860) Support metadata only replication for external tables

2020-02-11 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22860 started by Aasha Medhi.
--
> Support metadata only replication for external tables
> -
>
> Key: HIVE-22860
> URL: https://issues.apache.org/jira/browse/HIVE-22860
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22860.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Updated] (HIVE-22860) Support metadata only replication for external tables

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22860:
--
Labels: pull-request-available  (was: )

> Support metadata only replication for external tables
> -
>
> Key: HIVE-22860
> URL: https://issues.apache.org/jira/browse/HIVE-22860
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22860) Support metadata only replication for external tables

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22860?focusedWorklogId=385313&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385313
 ]

ASF GitHub Bot logged work on HIVE-22860:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 17:52
Start Date: 11/Feb/20 17:52
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #900: HIVE-22860 
Support metadata only replication for external tables
URL: https://github.com/apache/hive/pull/900
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 385313)
Remaining Estimate: 0h
Time Spent: 10m

> Support metadata only replication for external tables
> -
>
> Key: HIVE-22860
> URL: https://issues.apache.org/jira/browse/HIVE-22860
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22728) Limit the scope of uniqueness of constraint name to database

2020-02-11 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22728:
--
Attachment: HIVE-22728.02.patch

> Limit the scope of uniqueness of constraint name to database
> 
>
> Key: HIVE-22728
> URL: https://issues.apache.org/jira/browse/HIVE-22728
> Project: Hive
>  Issue Type: Wish
>Reporter: Jesus Camacho Rodriguez
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-22728.01.patch, HIVE-22728.02.patch
>
>
> Currently, constraint names are globally unique across all databases (the 
> assumption is that this may have been done by design). Nevertheless, though 
> the behavior seems to be implementation specific, it would be worthwhile to 
> limit the scope of uniqueness to one database.
> Currently we do not store database information with the constraints. To 
> change the scope to one database, we would need to store the DB_ID in the 
> KEY_CONSTRAINTS table in the metastore when we create a constraint, and add 
> the DB_ID to the PRIMARY KEY of that table. Some minor changes to the error 
> messages would be needed too, since otherwise it would be difficult to 
> identify the correct violation in queries that span multiple databases. 
> Additionally, the SQL upgrade scripts will need to populate the DB_ID when 
> upgrading to the new version.
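
The proposed scoping can be illustrated with a small sketch (class and method names are purely illustrative; the real change would live in the metastore's KEY_CONSTRAINTS handling, not in an in-memory map):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical illustration: uniqueness enforced per database
// (db -> set of constraint names) instead of one global namespace.
class ConstraintRegistry {
    private final Map<String, Set<String>> byDb = new HashMap<>();

    /** Returns false if the name is already taken within the same database. */
    boolean register(String db, String constraint) {
        return byDb.computeIfAbsent(db, d -> new HashSet<>()).add(constraint);
    }
}
```

With this scoping, the same constraint name can be reused in another database, which is exactly what storing DB_ID in the primary key would allow.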



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22824) JoinProjectTranspose rule should skip Projects containing windowing expression

2020-02-11 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22824:
---
Attachment: HIVE-22824.4.patch

> JoinProjectTranspose rule should skip Projects containing windowing expression
> --
>
> Key: HIVE-22824
> URL: https://issues.apache.org/jira/browse/HIVE-22824
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22824.1.patch, HIVE-22824.2.patch, 
> HIVE-22824.3.patch, HIVE-22824.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Otherwise this rule could end up creating plan with windowing expression 
> within join condition which hive doesn't know how to process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22824) JoinProjectTranspose rule should skip Projects containing windowing expression

2020-02-11 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22824:
---
Status: Open  (was: Patch Available)

> JoinProjectTranspose rule should skip Projects containing windowing expression
> --
>
> Key: HIVE-22824
> URL: https://issues.apache.org/jira/browse/HIVE-22824
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22824.1.patch, HIVE-22824.2.patch, 
> HIVE-22824.3.patch, HIVE-22824.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Otherwise this rule could end up creating plan with windowing expression 
> within join condition which hive doesn't know how to process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22824) JoinProjectTranspose rule should skip Projects containing windowing expression

2020-02-11 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-22824:
---
Status: Patch Available  (was: Open)

> JoinProjectTranspose rule should skip Projects containing windowing expression
> --
>
> Key: HIVE-22824
> URL: https://issues.apache.org/jira/browse/HIVE-22824
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22824.1.patch, HIVE-22824.2.patch, 
> HIVE-22824.3.patch, HIVE-22824.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Otherwise this rule could end up creating plan with windowing expression 
> within join condition which hive doesn't know how to process.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22850) Optimise lock acquisition in TxnHandler

2020-02-11 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034601#comment-17034601
 ] 

Hive QA commented on HIVE-22850:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12993100/HIVE-22850.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17990 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20560/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20560/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20560/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12993100 - PreCommit-HIVE-Build

> Optimise lock acquisition in TxnHandler
> ---
>
> Key: HIVE-22850
> URL: https://issues.apache.org/jira/browse/HIVE-22850
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-22850.1.patch, HIVE-22850.2.patch, 
> HIVE-22850.3.patch, Screenshot 2020-02-07 at 4.14.51 AM.jpg, jumpTableInfo.png
>
>
> With concurrent queries, the time taken for lock acquisition increases 
> substantially. As part of lock acquisition, {{TxnHandler::checkLock}} gets 
> invoked. This involves getting a mutex and comparing the locks being 
> requested with the existing locks in the {{HIVE_LOCKS}} table.
> With concurrent queries, the time taken to do this check increases, which 
> significantly increases the time other threads wait for the mutex 
> (due to select for update). In a synthetic workload, it was on the order of 
> 10+ seconds. This codepath can be optimized when all lock requests are 
> SHARED_READ.
>  
>  
> !Screenshot 2020-02-07 at 4.14.51 AM.jpg|width=743,height=348!
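
The SHARED_READ-only fast path suggested above could look roughly like this. This is a hedged sketch with illustrative names; the real TxnHandler logic consults the HIVE_LOCKS table rather than in-memory lists:

```java
import java.util.List;

// Sketch of the optimization idea: when every requested lock is SHARED_READ,
// the expensive per-lock pairwise comparison can be replaced by a cheaper
// check for any conflicting (exclusive) existing lock. In Hive's lock
// compatibility rules, SHARED_READ is compatible with both SHARED_READ and
// SHARED_WRITE; only EXCLUSIVE blocks it.
class CheckLockFastPath {
    enum LockType { SHARED_READ, SHARED_WRITE, EXCLUSIVE }

    static boolean canGrantAll(List<LockType> requested, List<LockType> existing) {
        boolean allSharedRead =
            requested.stream().allMatch(t -> t == LockType.SHARED_READ);
        if (allSharedRead) {
            // Fast path: grant unless some existing lock is EXCLUSIVE.
            return existing.stream().noneMatch(t -> t == LockType.EXCLUSIVE);
        }
        // Otherwise fall back to the full pairwise conflict check (elided here).
        return false;
    }
}
```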



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-22875) Refactor query creation in QueryCompactor implementations

2020-02-11 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22875 started by Karen Coppage.

> Refactor query creation in QueryCompactor implementations
> -
>
> Key: HIVE-22875
> URL: https://issues.apache.org/jira/browse/HIVE-22875
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>
> There is a lot of repetition where creation/compaction/drop queries are 
> created in MajorQueryCompactor, MinorQueryCompactor, MmMajorQueryCompactor 
> and MmMinorQueryCompactor.
> Initial idea is to create a CompactionQueryBuilder that all 4 implementations 
> would use.
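
A minimal sketch of the initial idea: one builder shared by the four QueryCompactor implementations. The enum values and generated SQL strings below are illustrative only, not the eventual Hive implementation:

```java
// Hypothetical CompactionQueryBuilder: each compactor would configure the
// builder instead of concatenating its own create/compact/drop strings.
class CompactionQueryBuilder {
    enum Operation { CREATE, COMPACT, DROP }

    private final Operation op;
    private String resultTable;
    private String sourceTable;

    CompactionQueryBuilder(Operation op) { this.op = op; }

    CompactionQueryBuilder setResultTable(String name) { this.resultTable = name; return this; }
    CompactionQueryBuilder setSourceTable(String name) { this.sourceTable = name; return this; }

    String build() {
        switch (op) {
            case CREATE:  return "create temporary table " + resultTable + " like " + sourceTable;
            case COMPACT: return "insert into " + resultTable + " select * from " + sourceTable;
            default:      return "drop table if exists " + resultTable;
        }
    }
}
```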



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22864) Add option to DatabaseRule to run the Schema Tool in verbose mode for tests

2020-02-11 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22864:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add option to DatabaseRule to run the Schema Tool in verbose mode for tests
> ---
>
> Key: HIVE-22864
> URL: https://issues.apache.org/jira/browse/HIVE-22864
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22864.01.patch
>
>
> The database schema tests in the metastore always run the Schema Tool in 
> non-verbose mode; verbose output can be helpful in case there is an error. 
> Let's introduce a new maven argument for the tests (-Dverbose.schematool) 
> that makes the Schema Tool produce verbose output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22864) Add option to DatabaseRule to run the Schema Tool in verbose mode for tests

2020-02-11 Thread Miklos Gergely (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034590#comment-17034590
 ] 

Miklos Gergely commented on HIVE-22864:
---

Merged to master, thanks [~abstractdog]!

> Add option to DatabaseRule to run the Schema Tool in verbose mode for tests
> ---
>
> Key: HIVE-22864
> URL: https://issues.apache.org/jira/browse/HIVE-22864
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22864.01.patch
>
>
> The database schema tests in the metastore always run the Schema Tool in 
> non-verbose mode; verbose output can be helpful in case there is an error. 
> Let's introduce a new maven argument for the tests (-Dverbose.schematool) 
> that makes the Schema Tool produce verbose output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22875) Refactor query creation in QueryCompactor implementations

2020-02-11 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage reassigned HIVE-22875:



> Refactor query creation in QueryCompactor implementations
> -
>
> Key: HIVE-22875
> URL: https://issues.apache.org/jira/browse/HIVE-22875
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>
> There is a lot of repetition where creation/compaction/drop queries are 
> created in MajorQueryCompactor, MinorQueryCompactor, MmMajorQueryCompactor 
> and MmMinorQueryCompactor.
> Initial idea is to create a CompactionQueryBuilder that all 4 implementations 
> would use.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22874) Beeline unable to use credentials from URL.

2020-02-11 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam reassigned HIVE-22874:



> Beeline unable to use credentials from URL.
> ---
>
> Key: HIVE-22874
> URL: https://issues.apache.org/jira/browse/HIVE-22874
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Minor
> Fix For: 4.0.0
>
>
> Beeline is not using the password value from the URL.
> Using LDAP Auth in this case, so the failure is on connect.
> bin/beeline -u 
> "jdbc:hive2://localhost:1/default;user=test1;password=test1" 
> On the server side in LdapAuthenticator, the principals come out as (via 
> special debug logging):
> 2020-02-11T11:10:31,613  INFO [HiveServer2-Handler-Pool: Thread-67] 
> auth.LdapAuthenticationProviderImpl: Connecting to ldap as 
> user/password:test1:anonymous
> This bug may have been introduced via
> https://github.com/apache/hive/commit/749e831060381a8ae4775630efb72d5cd040652f
> pass = "" (an empty string on this line):
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L848
> but on this line of code, it checks whether pass is null, which will not be 
> true, and hence it never picks up the password from the JDBC URL:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L900
> It has another chance here, but pass != null will always be true, so it 
> never goes into the else condition:
> https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L909
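
The null-vs-empty confusion described above can be sketched as follows. The method and parameter names are hypothetical, not the actual BeeLine code; the point is that an empty-string default defeats a plain null check:

```java
// Hypothetical resolution logic: treat the empty string as "unset" so the
// password embedded in the JDBC URL is actually used. A null-only check
// ("pass != null") always "wins" when pass was initialized to "", which is
// the failure mode reported in this issue.
class PasswordResolution {
    /** Returns the command-line password if genuinely set, else the URL one. */
    static String resolvePassword(String cliPass, String urlPass) {
        if (cliPass != null && !cliPass.isEmpty()) {
            return cliPass;
        }
        return urlPass;
    }
}
```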



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22841) ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner IllegalArgumentException on invalid cookie signature

2020-02-11 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034576#comment-17034576
 ] 

Kevin Risden commented on HIVE-22841:
-

Uploaded new patch fixing the two checkstyle issues [^HIVE-22841.2.patch] 

> ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner 
> IllegalArgumentException on invalid cookie signature
> -
>
> Key: HIVE-22841
> URL: https://issues.apache.org/jira/browse/HIVE-22841
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HIVE-22841.1.patch, HIVE-22841.2.patch
>
>
> Currently CookieSigner throws an IllegalArgumentException if the cookie 
> signature is invalid. 
> {code:java}
> if (!MessageDigest.isEqual(originalSignature.getBytes(), 
> currentSignature.getBytes())) {
>   throw new IllegalArgumentException("Invalid sign, original = " + 
> originalSignature +
> " current = " + currentSignature);
> }
> {code}
> CookieSigner is only used in ThriftHttpServlet#getClientNameFromCookie, 
> which doesn't handle the IllegalArgumentException; it only checks whether 
> the value from the cookie is null or not.
> https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java#L295
> {code:java}
>   currValue = signer.verifyAndExtract(currValue);
>   // Retrieve the user name, do the final validation step.
>   if (currValue != null) {
> {code}
> This should be fixed to either:
> a) Have CookieSigner not return an IllegalArgumentException
> b) Improve ThriftHttpServlet to handle CookieSigner throwing an 
> IllegalArgumentException
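
Option (b) can be sketched as a thin wrapper at the call site that treats a bad signature like a missing cookie. The interface below is a stand-in for CookieSigner#verifyAndExtract, not the actual Hive API:

```java
// Hedged sketch: catch the IllegalArgumentException and return null so the
// existing null handling in getClientNameFromCookie takes over, instead of
// the exception failing the request.
class SafeCookieExtract {
    interface CookieVerifier {
        String verifyAndExtract(String signedValue);
    }

    static String verifyQuietly(CookieVerifier signer, String signedValue) {
        try {
            return signer.verifyAndExtract(signedValue);
        } catch (IllegalArgumentException e) {
            // Invalid signature: treat the cookie as absent.
            return null;
        }
    }
}
```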



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22841) ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner IllegalArgumentException on invalid cookie signature

2020-02-11 Thread Kevin Risden (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated HIVE-22841:

Attachment: HIVE-22841.2.patch

> ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner 
> IllegalArgumentException on invalid cookie signature
> -
>
> Key: HIVE-22841
> URL: https://issues.apache.org/jira/browse/HIVE-22841
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HIVE-22841.1.patch, HIVE-22841.2.patch
>
>
> Currently CookieSigner throws an IllegalArgumentException if the cookie 
> signature is invalid. 
> {code:java}
> if (!MessageDigest.isEqual(originalSignature.getBytes(), 
> currentSignature.getBytes())) {
>   throw new IllegalArgumentException("Invalid sign, original = " + 
> originalSignature +
> " current = " + currentSignature);
> }
> {code}
> CookieSigner is only used in ThriftHttpServlet#getClientNameFromCookie, 
> which doesn't handle the IllegalArgumentException; it only checks whether 
> the value from the cookie is null or not.
> https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java#L295
> {code:java}
>   currValue = signer.verifyAndExtract(currValue);
>   // Retrieve the user name, do the final validation step.
>   if (currValue != null) {
> {code}
> This should be fixed to either:
> a) Have CookieSigner not return an IllegalArgumentException
> b) Improve ThriftHttpServlet to handle CookieSigner throwing an 
> IllegalArgumentException



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22873) Make it possible to identify which hs2 instance executed a scheduled query

2020-02-11 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-22873:
---


> Make it possible to identify which hs2 instance executed a scheduled query
> --
>
> Key: HIVE-22873
> URL: https://issues.apache.org/jira/browse/HIVE-22873
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> Right now only the query_id is shown; in case of multiple HS2 instances the 
> question is which instance executed the query, and users have to resort to 
> grepping the logs for the given query id.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22872) Support multiple executors for scheduled queries

2020-02-11 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-22872:
---


> Support multiple executors for scheduled queries
> 
>
> Key: HIVE-22872
> URL: https://issues.apache.org/jira/browse/HIVE-22872
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22850) Optimise lock acquisition in TxnHandler

2020-02-11 Thread Zoltan Chovan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034572#comment-17034572
 ] 

Zoltan Chovan commented on HIVE-22850:
--

[~rajesh.balamohan] I think Peter was referring to this part:

{code:java}
StringBuilder query = new StringBuilder("select count(*) from "
    + "\"HIVE_LOCKS\" where \"HL_DB\" in (");
boolean first = true;
for (String s : dbs) {
  if (first) first = false;
  else query.append(", ");
  query.append('\'');
  query.append(s);
  query.append('\'');
}
{code}

Oracle DB has a limitation on how many elements can be present in an IN() 
clause; if there are over 1,000 the query will fail. To avoid that, you might 
consider some kind of batching for this part.
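
One way to batch, sketched below: OR together several IN() groups so no single group exceeds Oracle's 1000-element limit. The helper is illustrative, not the actual TxnHandler code:

```java
import java.util.List;

// Hedged sketch: split the value list into chunks of at most 1000 and emit
// ("COL" in (...) or "COL" in (...) or ...) so the statement stays valid on
// Oracle. No escaping is done here; real code would bind or sanitize values.
class InClauseBatching {
    static final int ORACLE_IN_LIMIT = 1000;

    static String buildInClause(String column, List<String> values) {
        StringBuilder sb = new StringBuilder("(");
        for (int i = 0; i < values.size(); i += ORACLE_IN_LIMIT) {
            if (i > 0) sb.append(" or ");
            sb.append('"').append(column).append("\" in (");
            List<String> batch =
                values.subList(i, Math.min(i + ORACLE_IN_LIMIT, values.size()));
            for (int j = 0; j < batch.size(); j++) {
                if (j > 0) sb.append(", ");
                sb.append('\'').append(batch.get(j)).append('\'');
            }
            sb.append(')');
        }
        return sb.append(')').toString();
    }
}
```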

> Optimise lock acquisition in TxnHandler
> ---
>
> Key: HIVE-22850
> URL: https://issues.apache.org/jira/browse/HIVE-22850
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-22850.1.patch, HIVE-22850.2.patch, 
> HIVE-22850.3.patch, Screenshot 2020-02-07 at 4.14.51 AM.jpg, jumpTableInfo.png
>
>
> With concurrent queries, the time taken for lock acquisition increases 
> substantially. As part of lock acquisition, {{TxnHandler::checkLock}} gets 
> invoked. This involves getting a mutex and comparing the locks being 
> requested with the existing locks in the {{HIVE_LOCKS}} table.
> With concurrent queries, the time taken to do this check increases, which 
> significantly increases the time other threads wait for the mutex 
> (due to select for update). In a synthetic workload, it was on the order of 
> 10+ seconds. This codepath can be optimized when all lock requests are 
> SHARED_READ.
>  
>  
> !Screenshot 2020-02-07 at 4.14.51 AM.jpg|width=743,height=348!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22589) Add storage support for ProlepticCalendar in ORC, Parquet, and Avro

2020-02-11 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17034561#comment-17034561
 ] 

Jesus Camacho Rodriguez commented on HIVE-22589:


[~abstractdog], I do not see the comments, did you leave them in the PR?

> Add storage support for ProlepticCalendar in ORC, Parquet, and Avro
> ---
>
> Key: HIVE-22589
> URL: https://issues.apache.org/jira/browse/HIVE-22589
> Project: Hive
>  Issue Type: Bug
>  Components: Avro, ORC, Parquet
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0, 3.2.0, 3.1.3
>
> Attachments: HIVE-22589.01.patch, HIVE-22589.02.patch, 
> HIVE-22589.03.patch, HIVE-22589.04.patch, HIVE-22589.05.patch, 
> HIVE-22589.06.patch, HIVE-22589.07.patch, HIVE-22589.07.patch, 
> HIVE-22589.07.patch, HIVE-22589.patch, HIVE-22589.patch
>
>
> Hive recently moved its processing to the proleptic calendar, which has 
> created some issues for users who have dates before 1580 AD.
> HIVE-22405 extended the column vectors for times & dates to encode which 
> calendar they are using.
> This issue is to support proleptic calendar in ORC, Parquet, and Avro, when 
> files are written/read by Hive. To preserve compatibility with other engines 
> until they upgrade their readers, files will be written using hybrid calendar 
> by default. Default behavior when files do not contain calendar information 
> in their metadata is configurable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22747) Break up DDLSemanticAnalyzer - extract Table info and lock analyzers

2020-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22747?focusedWorklogId=385242&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-385242
 ]

ASF GitHub Bot logged work on HIVE-22747:
-

Author: ASF GitHub Bot
Created on: 11/Feb/20 15:54
Start Date: 11/Feb/20 15:54
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #882: HIVE-22747 
Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
URL: https://github.com/apache/hive/pull/882#discussion_r377724650
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/TableInfoUtils.java
 ##
 @@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info;
+
+import java.util.Map;
+
+import org.apache.hadoop.hive.ql.ErrorMsg;
+import org.apache.hadoop.hive.ql.ddl.table.partition.PartitionUtils;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+
+/**
+ * Utilities used by table information DDL commands.
+ */
+public final class TableInfoUtils {
+  private TableInfoUtils() {
+    throw new UnsupportedOperationException("TableInfoUtils should not be instantiated");
+  }
+
+  public static void validateDatabase(Hive db, String databaseName) throws SemanticException {
 
 Review comment:
   the methods in this class should be in the "Hive" class, because the first 
argument of each method is an instance of Hive
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 385242)
Time Spent: 20m  (was: 10m)

> Break up DDLSemanticAnalyzer - extract Table info and lock analyzers
> 
>
> Key: HIVE-22747
> URL: https://issues.apache.org/jira/browse/HIVE-22747
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22747.01.patch, HIVE-22747.02.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it in order to have everything cut into more handleable classes 
> under the package  org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzers
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the amount of classes under a package is more manageable
> Step #13: extract the table info and lock related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   >