[jira] [Updated] (HIVE-20168) ReduceSinkOperator Logging Hidden

2018-07-23 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-20168:

Attachment: HIVE-20168.2.patch

> ReduceSinkOperator Logging Hidden
> -
>
> Key: HIVE-20168
> URL: https://issues.apache.org/jira/browse/HIVE-20168
> Project: Hive
>  Issue Type: Bug
>  Components: Operators
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20168.1.patch, HIVE-20168.2.patch
>
>
> [https://github.com/apache/hive/blob/ac6b2a3fb195916e22b2e5f465add2ffbcdc7430/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java]
>  
> {code:java}
> if (LOG.isTraceEnabled()) {
>   if (numRows == cntr) {
> cntr = logEveryNRows == 0 ? cntr * 10 : numRows + logEveryNRows;
> if (cntr < 0 || numRows < 0) {
>   cntr = 0;
>   numRows = 1;
> }
> LOG.info(toString() + ": records written - " + numRows);
>   }
> }
> ...
> if (LOG.isTraceEnabled()) {
>   LOG.info(toString() + ": records written - " + numRows);
> }
> {code}
> There are logging guards here checking for TRACE-level logging, but the
> logging statements themselves are at INFO.  This is important logging for
> detecting data skew.  Please change the guards to check for INFO... or,
> preferably, remove the guards altogether, since it is very rare that a
> service runs with only WARN-level logging.
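
One way the guard mismatch could be resolved, sketched against the excerpt above
(this mirrors the reporter's first suggestion and is not the attached patch):

{code:java}
// Guard at the same level as the statement it protects, keeping the
// counter logic from the excerpt intact.
if (LOG.isInfoEnabled()) {
  if (numRows == cntr) {
    cntr = logEveryNRows == 0 ? cntr * 10 : numRows + logEveryNRows;
    if (cntr < 0 || numRows < 0) {
      cntr = 0;
      numRows = 1;
    }
    LOG.info(toString() + ": records written - " + numRows);
  }
}
{code}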



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18852) Misleading error message in alter table validation

2018-07-23 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553406#comment-16553406
 ] 

Andrew Sherman commented on HIVE-18852:
---

[~vgarg] thanks for the +1, can you push this to master if possible? Thanks.

> Misleading error message in alter table validation
> --
>
> Key: HIVE-18852
> URL: https://issues.apache.org/jira/browse/HIVE-18852
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 2.4.0
>Reporter: Dan Burkert
>Assignee: Andrew Sherman
>Priority: Major
> Attachments: HIVE-18852.1.patch
>
>
> The metastore's validation error message when attempting to rename a table to 
> a non-existent database is wrong.  For instance, attempting to alter table 
> 'db.table' to 'non_existent_database.table' results in the Thrift error:
> {{TException - service has thrown: InvalidOperationException(message=Unable 
> to change partition or table. Database db does not exist Check metastore logs 
> for detailed stack.non_existent_database)}}
> I believe the offending line of code is 
> [here|https://github.com/apache/hive/blob/branch-2/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java?utf8=%E2%9C%93#L331-L333];
>  notice that {{dbname}} is used in the message, not {{newDbName}}.  I don't
> know if switching that would cause the non-existent {{dbname}} case to
> regress, though.
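
To make the report concrete, here is a self-contained illustration of the message
mix-up; the method names and the "clearer" variant are hypothetical, not the
actual HiveAlterHandler code or the attached patch:

{code:java}
public class AlterTableMessageSketch {

  // Mirrors the reported behavior: the source database name (dbname) appears in
  // the text, while the target name (newDbName) is only appended at the end.
  static String reportedMessage(String dbname, String newDbName) {
    return "Unable to change partition or table. Database " + dbname
        + " does not exist Check metastore logs for detailed stack." + newDbName;
  }

  // One possible wording: name the database that actually failed the lookup.
  static String clearerMessage(String dbname, String newDbName) {
    return "Unable to change partition or table. Database " + newDbName
        + " does not exist. Check metastore logs for a detailed stack trace.";
  }

  public static void main(String[] args) {
    System.out.println(reportedMessage("db", "non_existent_database"));
    System.out.println(clearerMessage("db", "non_existent_database"));
  }
}
{code}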



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19891) inserting into external tables with custom partition directories may cause data loss

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553400#comment-16553400
 ] 

Hive QA commented on HIVE-19891:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932758/HIVE-19891.07.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14683 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12802/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12802/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12802/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932758 - PreCommit-HIVE-Build

> inserting into external tables with custom partition directories may cause 
> data loss
> 
>
> Key: HIVE-19891
> URL: https://issues.apache.org/jira/browse/HIVE-19891
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19891.01.patch, HIVE-19891.02.patch, 
> HIVE-19891.03.patch, HIVE-19891.04.patch, HIVE-19891.05.patch, 
> HIVE-19891.06.patch, HIVE-19891.07.patch, HIVE-19891.patch
>
>
> tbl1 is just used as a prop to create data; it could be an existing directory
> for an external table.
> Due to weird behavior of LoadTableDesc (some ancient code for overriding the
> old partition path), the custom partition path is overwritten after the query,
> and the data in it ceases to be part of the table (this can be seen in the
> describe formatted output with masking commented out in QTestUtil).
> This affects branch-1 too, so it's pretty old.
> {noformat}drop table tbl1;
> CREATE TABLE tbl1 (index int, value int ) PARTITIONED BY ( created_date 
> string );
> insert into tbl1 partition(created_date='2018-02-01') VALUES (2, 2);
> CREATE external TABLE tbl2 (index int, value int ) PARTITIONED BY ( 
> created_date string );
> ALTER TABLE tbl2 ADD PARTITION(created_date='2018-02-01');
> ALTER TABLE tbl2 PARTITION(created_date='2018-02-01') SET LOCATION 
> 'file:/Users/sergey/git/hivegit/itests/qtest/target/warehouse/tbl1/created_date=2018-02-01';
> select * from tbl2;
> describe formatted tbl2 partition(created_date='2018-02-01');
> insert into tbl2 partition(created_date='2018-02-01') VALUES (1, 1);
> select * from tbl2;
> describe formatted tbl2 partition(created_date='2018-02-01');
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19715) Consolidated and flexible API for fetching partition metadata from HMS

2018-07-23 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553396#comment-16553396
 ] 

Vihang Karajgaonkar commented on HIVE-19715:


Attached the first version of the design proposal for the new API.

TLDR
The API reuses the existing {{PartitionSpec}} objects and makes some of the fields
in PartitionSpec optional. It also supports the following (a rough sketch appears
after the list):
1. A projection list, which is a list of dot-separated field names. For
example, clients that are interested only in partition locations can request
{{sd.location}}, and the result will include only the locations instead of the
full partition objects.
2. A FilterSpec, which provides different ways to filter the partitions of a
given table. The current proposal supports {{BY_NAMES}}, {{BY_VALUES}} or
{{BY_EXPR}}, although it's not clear whether there is value in providing
{{BY_VALUES}} filters.
3. Pagination: the API response contains a pagination token which can be used by
clients to send subsequent requests that retrieve configurable batches of
partitions. The pagination token itself is a {{byte[]}} which the client doesn't
need to interpret. Internally, the server can encode values in the token such as
the last {{PART_ID}} sent, a table modification stamp, etc.
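
A plain-Java sketch of the request shape described in points 1-3 above; the class
and field names here are illustrative only, not the proposed Thrift definitions:

{code:java}
import java.util.List;

public class PartitionRequestSketch {

  enum FilterMode { BY_NAMES, BY_VALUES, BY_EXPR }

  static class FilterSpec {
    FilterMode mode;
    List<String> filters;     // partition names, values, or a serialized expression
  }

  static class GetPartitionsRequest {
    String dbName;
    String tableName;
    List<String> projection;  // e.g. ["sd.location"] to fetch only locations
    FilterSpec filter;
    byte[] paginationToken;   // opaque to the client; echoed back on the next call
    int maxParts;             // batch size for each page of results
  }
}
{code}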

Any thoughts or suggestions?

cc: [~alangates] [~thejas] [~tlipcon] [~akolb]

> Consolidated and flexible API for fetching partition metadata from HMS
> --
>
> Key: HIVE-19715
> URL: https://issues.apache.org/jira/browse/HIVE-19715
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Todd Lipcon
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-19715-design-doc.pdf
>
>
> Currently, the HMS thrift API exposes 17 different APIs for fetching 
> partition-related information. There is somewhat of a combinatorial explosion 
> going on, where each API has variants with and without "auth" info, by pspecs 
> vs names, by filters, by exprs, etc. Having all of these separate APIs long 
> term is a maintenance burden and also more confusing for consumers.
> Additionally, even with all of these APIs, there is a lack of granularity in 
> fetching only the information needed for a particular use case. For example, 
> in some use cases it may be beneficial to only fetch the partition locations 
> without wasting effort fetching statistics, etc.
> This JIRA proposes that we add a new "one API to rule them all" for fetching 
> partition info. The request and response would be encapsulated in structs. 
> Some desirable properties:
> - the request should be able to specify which pieces of information are 
> required (eg location, properties, etc)
> - in the case of partition parameters, the request should be able to do 
> either whitelisting or blacklisting (eg to exclude large incremental column 
> stats HLL dumped in there by Impala)
> - the request should optionally specify auth info (to encompass the 
> "with_auth" variants)
> - the request should be able to designate the set of partitions to access 
> through one of several different methods (eg "all", list, expr, 
> part_vals, etc) 
> - the struct should be easily evolvable so that new pieces of info can be 
> added
> - the response should be designed in such a way as to avoid transferring 
> redundant information for common cases (eg simple "dictionary coding" of 
> strings like parameter names, etc)
> - the API should support some form of pagination for tables with large 
> partition counts



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19715) Consolidated and flexible API for fetching partition metadata from HMS

2018-07-23 Thread Vihang Karajgaonkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-19715:
---
Attachment: HIVE-19715-design-doc.pdf

> Consolidated and flexible API for fetching partition metadata from HMS
> --
>
> Key: HIVE-19715
> URL: https://issues.apache.org/jira/browse/HIVE-19715
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore
>Reporter: Todd Lipcon
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-19715-design-doc.pdf
>
>
> Currently, the HMS thrift API exposes 17 different APIs for fetching 
> partition-related information. There is somewhat of a combinatorial explosion 
> going on, where each API has variants with and without "auth" info, by pspecs 
> vs names, by filters, by exprs, etc. Having all of these separate APIs long 
> term is a maintenance burden and also more confusing for consumers.
> Additionally, even with all of these APIs, there is a lack of granularity in 
> fetching only the information needed for a particular use case. For example, 
> in some use cases it may be beneficial to only fetch the partition locations 
> without wasting effort fetching statistics, etc.
> This JIRA proposes that we add a new "one API to rule them all" for fetching 
> partition info. The request and response would be encapsulated in structs. 
> Some desirable properties:
> - the request should be able to specify which pieces of information are 
> required (eg location, properties, etc)
> - in the case of partition parameters, the request should be able to do 
> either whitelisting or blacklisting (eg to exclude large incremental column 
> stats HLL dumped in there by Impala)
> - the request should optionally specify auth info (to encompass the 
> "with_auth" variants)
> - the request should be able to designate the set of partitions to access 
> through one of several different methods (eg "all", list, expr, 
> part_vals, etc) 
> - the struct should be easily evolvable so that new pieces of info can be 
> added
> - the response should be designed in such a way as to avoid transferring 
> redundant information for common cases (eg simple "dictionary coding" of 
> strings like parameter names, etc)
> - the API should support some form of pagination for tables with large 
> partition counts



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20168) ReduceSinkOperator Logging Hidden

2018-07-23 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-20168:

Status: Patch Available  (was: Open)

> ReduceSinkOperator Logging Hidden
> -
>
> Key: HIVE-20168
> URL: https://issues.apache.org/jira/browse/HIVE-20168
> Project: Hive
>  Issue Type: Bug
>  Components: Operators
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20168.1.patch
>
>
> [https://github.com/apache/hive/blob/ac6b2a3fb195916e22b2e5f465add2ffbcdc7430/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java]
>  
> {code:java}
> if (LOG.isTraceEnabled()) {
>   if (numRows == cntr) {
> cntr = logEveryNRows == 0 ? cntr * 10 : numRows + logEveryNRows;
> if (cntr < 0 || numRows < 0) {
>   cntr = 0;
>   numRows = 1;
> }
> LOG.info(toString() + ": records written - " + numRows);
>   }
> }
> ...
> if (LOG.isTraceEnabled()) {
>   LOG.info(toString() + ": records written - " + numRows);
> }
> {code}
> There are logging guards here checking for TRACE-level logging, but the
> logging statements themselves are at INFO.  This is important logging for
> detecting data skew.  Please change the guards to check for INFO... or,
> preferably, remove the guards altogether, since it is very rare that a
> service runs with only WARN-level logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20168) ReduceSinkOperator Logging Hidden

2018-07-23 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-20168:

Attachment: HIVE-20168.1.patch

> ReduceSinkOperator Logging Hidden
> -
>
> Key: HIVE-20168
> URL: https://issues.apache.org/jira/browse/HIVE-20168
> Project: Hive
>  Issue Type: Bug
>  Components: Operators
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20168.1.patch
>
>
> [https://github.com/apache/hive/blob/ac6b2a3fb195916e22b2e5f465add2ffbcdc7430/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java]
>  
> {code:java}
> if (LOG.isTraceEnabled()) {
>   if (numRows == cntr) {
> cntr = logEveryNRows == 0 ? cntr * 10 : numRows + logEveryNRows;
> if (cntr < 0 || numRows < 0) {
>   cntr = 0;
>   numRows = 1;
> }
> LOG.info(toString() + ": records written - " + numRows);
>   }
> }
> ...
> if (LOG.isTraceEnabled()) {
>   LOG.info(toString() + ": records written - " + numRows);
> }
> {code}
> There are logging guards here checking for TRACE-level logging, but the
> logging statements themselves are at INFO.  This is important logging for
> detecting data skew.  Please change the guards to check for INFO... or,
> preferably, remove the guards altogether, since it is very rare that a
> service runs with only WARN-level logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19891) inserting into external tables with custom partition directories may cause data loss

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553364#comment-16553364
 ] 

Hive QA commented on HIVE-19891:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 3 new + 497 unchanged - 0 
fixed = 500 total (was 497) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12802/dev-support/hive-personality.sh
 |
| git revision | master / 5e7aa09 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12802/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12802/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> inserting into external tables with custom partition directories may cause 
> data loss
> 
>
> Key: HIVE-19891
> URL: https://issues.apache.org/jira/browse/HIVE-19891
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19891.01.patch, HIVE-19891.02.patch, 
> HIVE-19891.03.patch, HIVE-19891.04.patch, HIVE-19891.05.patch, 
> HIVE-19891.06.patch, HIVE-19891.07.patch, HIVE-19891.patch
>
>
> tbl1 is just used as a prop to create data; it could be an existing directory
> for an external table.
> Due to weird behavior of LoadTableDesc (some ancient code for overriding the
> old partition path), the custom partition path is overwritten after the query,
> and the data in it ceases to be part of the table (this can be seen in the
> describe formatted output with masking commented out in QTestUtil).
> This affects branch-1 too, so it's pretty old.
> {noformat}drop table tbl1;
> CREATE TABLE tbl1 (index int, value int ) PARTITIONED BY ( created_date 
> string );
> insert into tbl1 partition(created_date='2018-02-01') VALUES (2, 2);
> CREATE external TABLE tbl2 (index int, value int ) PARTITIONED BY ( 
> created_date string );
> ALTER TABLE tbl2 ADD PARTITION(created_date='2018-02-01');
> ALTER TABLE tbl2 PARTITION(created_date='2018-02-01') SET LOCATION 
> 

[jira] [Commented] (HIVE-20201) Hive shouldn't use HBase's Base64 implementation

2018-07-23 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553363#comment-16553363
 ] 

Naveen Gangam commented on HIVE-20201:
--

[~vgarg] Thanks. Pushed to branch-3. I have updated the Fix Version/s to 
include 3.2. Please correct it if that is wrong. Thanks

> Hive shouldn't use HBase's Base64 implementation
> 
>
> Key: HIVE-20201
> URL: https://issues.apache.org/jira/browse/HIVE-20201
> Project: Hive
>  Issue Type: Task
>  Components: HBase Handler
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20201.0.patch
>
>
> HBase is removing their Base64 implementation because it never should have 
> been public, so Hive should switch to a different provider. Hive already uses 
> Commons-Codec Base64 in other places, so that would be a natural replacement.
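
For reference, a minimal sketch of the Commons-Codec calls that typically stand in
for an HBase Base64 helper; this is a generic example, not an excerpt from the
attached patch:

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.commons.codec.binary.Base64;

public class Base64Sketch {
  public static void main(String[] args) {
    byte[] raw = "hive".getBytes(StandardCharsets.UTF_8);

    // Commons-Codec encode/decode round trip.
    String encoded = Base64.encodeBase64String(raw);
    byte[] decoded = Base64.decodeBase64(encoded);

    System.out.println(encoded);                                      // aGl2ZQ==
    System.out.println(new String(decoded, StandardCharsets.UTF_8));  // hive
  }
}
{code}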



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20201) Hive shouldn't use HBase's Base64 implementation

2018-07-23 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-20201:
-
Fix Version/s: 3.2.0

> Hive shouldn't use HBase's Base64 implementation
> 
>
> Key: HIVE-20201
> URL: https://issues.apache.org/jira/browse/HIVE-20201
> Project: Hive
>  Issue Type: Task
>  Components: HBase Handler
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20201.0.patch
>
>
> HBase is removing their Base64 implementation because it never should have 
> been public, so Hive should switch to a different provider. Hive already uses 
> Commons-Codec Base64 in other places, so that would be a natural replacement.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553328#comment-16553328
 ] 

Hive QA commented on HIVE-20221:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932756/HIVE-20221.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14682 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12801/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12801/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12801/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932756 - PreCommit-HIVE-Build

> Increase column width for partition_params
> --
>
> Key: HIVE-20221
> URL: https://issues.apache.org/jira/browse/HIVE-20221
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20221.01.patch, HIVE-20221.01.patch
>
>
> HIVE-12274 addressed almost all metastore columns; however, it left out 
> PARTITION_PARAMS, so in the case of partitioned tables the limits are still 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-16882:
---
Status: Open  (was: Patch Available)

> Improvements For Avro SerDe Package
> ---
>
> Key: HIVE-16882
> URL: https://issues.apache.org/jira/browse/HIVE-16882
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, 
> HIVE-16882.3.patch, HIVE-16882.4.patch, HIVE-16882.5.patch, 
> HIVE-16882.6.patch, HIVE-16882.7.patch, HIVE-16882.8.patch, HIVE-16882.9.patch
>
>
> # Use SLF4J parameterized DEBUG logging (see the sketch after this list)
> # Use re-usable libraries where appropriate
> # Use enhanced for loops where appropriate
> # Fix several minor check-style errors
> # Small performance enhancements in InstanceCache
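
A minimal sketch of the parameterized-logging idiom referenced in item #1; the
class and variable names are made up for illustration:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jParameterizedSketch {
  private static final Logger LOG = LoggerFactory.getLogger(Slf4jParameterizedSketch.class);

  public static void main(String[] args) {
    String schemaName = "example_record";
    int fieldCount = 12;

    // Concatenation builds the message string even when DEBUG is disabled:
    //   LOG.debug("Loaded schema " + schemaName + " with " + fieldCount + " fields");

    // The parameterized form defers formatting until the level check passes.
    LOG.debug("Loaded schema {} with {} fields", schemaName, fieldCount);
  }
}
{code}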



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-16882:
---
Attachment: HIVE-16882.9.patch

> Improvements For Avro SerDe Package
> ---
>
> Key: HIVE-16882
> URL: https://issues.apache.org/jira/browse/HIVE-16882
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, 
> HIVE-16882.3.patch, HIVE-16882.4.patch, HIVE-16882.5.patch, 
> HIVE-16882.6.patch, HIVE-16882.7.patch, HIVE-16882.8.patch, HIVE-16882.9.patch
>
>
> # Use SLF4J parameterized DEBUG logging
> # Use re-usable libraries where appropriate
> # Use enhanced for loops where appropriate
> # Fix several minor check-style errors
> # Small performance enhancements in InstanceCache



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-16882:
---
Status: Patch Available  (was: Open)

> Improvements For Avro SerDe Package
> ---
>
> Key: HIVE-16882
> URL: https://issues.apache.org/jira/browse/HIVE-16882
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, 
> HIVE-16882.3.patch, HIVE-16882.4.patch, HIVE-16882.5.patch, 
> HIVE-16882.6.patch, HIVE-16882.7.patch, HIVE-16882.8.patch, HIVE-16882.9.patch
>
>
> # Use SLF4J parameterized DEBUG logging
> # Use re-usable libraries where appropriate
> # Use enhanced for loops where appropriate
> # Fix several minor check-style errors
> # Small performance enhancements in InstanceCache



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17683) Add explain locks command

2018-07-23 Thread Igor Kryvenko (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553283#comment-16553283
 ] 

Igor Kryvenko commented on HIVE-17683:
--

[~ekoifman] Yeah, of course, I will attach it. Thanks for the review.

> Add explain locks  command
> ---
>
> Key: HIVE-17683
> URL: https://issues.apache.org/jira/browse/HIVE-17683
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Critical
> Attachments: HIVE-17683.01.patch, HIVE-17683.02.patch, 
> HIVE-17683.03.patch, HIVE-17683.04.patch, HIVE-17683.05.patch, 
> HIVE-17683.06.patch
>
>
> Explore if it's possible to add info to the query plan about which locks will 
> be requested.
> Lock acquisition (for the Acid Lock Manager) is done in 
> DbTxnManager.acquireLocks(), which is called once the query starts running.  
> That would need to be refactored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19937) Intern fields in MapWork on deserialization

2018-07-23 Thread Misha Dmitriev (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553279#comment-16553279
 ] 

Misha Dmitriev commented on HIVE-19937:
---

The last patch looks good to me.

The only slight concern I have from looking at it once again is the
following: in one or two places you switched from passing around Strings to
passing around Paths, and subsequently switched some {{HashMap<String, ...>}}
maps to {{HashMap<Path, ...>}}. Note that a lookup in a map whose keys
are complex objects is slower, because Path.equals(Path) is slower than
String.equals(String) - it may involve comparison of many strings, etc. I
haven't seen any reports of Hive CPU performance problems, and I hope this code
is not on a critical path, and/or that the GC-related savings will offset the
potential hashmap lookup slowdown... but anyway, it's worth keeping in mind.
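
To make the lookup-cost point concrete, here is a minimal sketch contrasting
String-keyed and Path-keyed maps; the paths are made up, and the cost difference
comes from {{org.apache.hadoop.fs.Path#equals}} comparing the wrapped URI's
components rather than a single character array:

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.Path;

public class PathKeyLookupSketch {
  public static void main(String[] args) {
    // String-keyed map: each probe is one hash plus one String.equals().
    Map<String, Integer> byString = new HashMap<>();
    byString.put("hdfs://nn:8020/warehouse/tbl/part=1", 1);

    // Path-keyed map: each probe hashes and compares the underlying URI,
    // which involves several component comparisons per key.
    Map<Path, Integer> byPath = new HashMap<>();
    byPath.put(new Path("hdfs://nn:8020/warehouse/tbl/part=1"), 1);

    System.out.println(byString.get("hdfs://nn:8020/warehouse/tbl/part=1"));      // 1
    System.out.println(byPath.get(new Path("hdfs://nn:8020/warehouse/tbl/part=1"))); // 1
  }
}
{code}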

> Intern fields in MapWork on deserialization
> ---
>
> Key: HIVE-19937
> URL: https://issues.apache.org/jira/browse/HIVE-19937
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-19937.1.patch, HIVE-19937.2.patch, 
> HIVE-19937.3.patch, HIVE-19937.4.patch, HIVE-19937.5.patch, 
> post-patch-report.html, report.html
>
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the 
> {{JobConf}} object to prevent any {{ConcurrentModificationException}} from 
> being thrown. However, setting this variable comes at a cost of storing a 
> duplicate {{JobConf}} object for each Spark task. These objects can take up a 
> significant amount of memory; we should intern them so that Spark tasks 
> running in the same JVM don't store duplicate copies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553277#comment-16553277
 ] 

Hive QA commented on HIVE-20221:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12801/dev-support/hive-personality.sh
 |
| git revision | master / 5e7aa09 |
| Default Java | 1.8.0_111 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12801/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12801/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12801/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Increase column width for partition_params
> --
>
> Key: HIVE-20221
> URL: https://issues.apache.org/jira/browse/HIVE-20221
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20221.01.patch, HIVE-20221.01.patch
>
>
> HIVE-12274 addressed almost all metastore columns; however, it left out 
> PARTITION_PARAMS, so in the case of partitioned tables the limits are still 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20207) Vectorization: Fix NULL / Wrong Results issues in Filter / Compare

2018-07-23 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20207:

Attachment: HIVE-20207.05.patch

> Vectorization: Fix NULL / Wrong Results issues in Filter / Compare
> --
>
> Key: HIVE-20207
> URL: https://issues.apache.org/jira/browse/HIVE-20207
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20207.01.patch, HIVE-20207.02.patch, 
> HIVE-20207.03.patch, HIVE-20207.04.patch, HIVE-20207.05.patch
>
>
> Write new UT tests that use random data and intentional isRepeating batches 
> to check for NULL and Wrong Results for vectorized filter and compare.
> BUGS:
> 1) The LongColLessLongColumn SIMD optimization does not work for very large 
> integers:
>  -7272907770454997143 < 8976171455044006767
>  outputVector[i] = (vector1[i] - vector2[i]) >>> 63;
>  Produces 0 instead of 1...
> Also, add DECIMAL_64 testing. Add missing DECIMAL/DECIMAL_64 Comparison and 
> IF vectorized expression classes.
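
A standalone illustration of the overflow behind bug 1; this is just the
arithmetic the subtraction trick relies on, not the vectorized expression code
itself:

{code:java}
public class SimdCompareOverflowSketch {
  public static void main(String[] args) {
    long a = -7272907770454997143L;
    long b = 8976171455044006767L;

    // Sign-bit trick from the description: 1 if (a - b) is negative, else 0.
    long viaSubtraction = (a - b) >>> 63;

    // What the comparison should actually report.
    long viaComparison = a < b ? 1 : 0;

    // a - b overflows the 64-bit range and wraps around to a positive value,
    // so the sign-bit trick says 0 even though a < b.
    System.out.println("subtraction trick: " + viaSubtraction);  // 0
    System.out.println("direct comparison: " + viaComparison);   // 1
  }
}
{code}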



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20207) Vectorization: Fix NULL / Wrong Results issues in Filter / Compare

2018-07-23 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20207:

Status: Patch Available  (was: In Progress)

> Vectorization: Fix NULL / Wrong Results issues in Filter / Compare
> --
>
> Key: HIVE-20207
> URL: https://issues.apache.org/jira/browse/HIVE-20207
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20207.01.patch, HIVE-20207.02.patch, 
> HIVE-20207.03.patch, HIVE-20207.04.patch, HIVE-20207.05.patch
>
>
> Write new UT tests that use random data and intentional isRepeating batches 
> to check for NULL and Wrong Results for vectorized filter and compare.
> BUGS:
> 1) The LongColLessLongColumn SIMD optimization does not work for very large 
> integers:
>  -7272907770454997143 < 8976171455044006767
>  outputVector[i] = (vector1[i] - vector2[i]) >>> 63;
>  Produces 0 instead of 1...
> Also, add DECIMAL_64 testing. Add missing DECIMAL/DECIMAL_64 Comparison and 
> IF vectorized expression classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20207) Vectorization: Fix NULL / Wrong Results issues in Filter / Compare

2018-07-23 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20207:

Status: In Progress  (was: Patch Available)

> Vectorization: Fix NULL / Wrong Results issues in Filter / Compare
> --
>
> Key: HIVE-20207
> URL: https://issues.apache.org/jira/browse/HIVE-20207
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20207.01.patch, HIVE-20207.02.patch, 
> HIVE-20207.03.patch, HIVE-20207.04.patch
>
>
> Write new UT tests that use random data and intentional isRepeating batches 
> to check for NULL and Wrong Results for vectorized filter and compare.
> BUGS:
> 1) The LongColLessLongColumn SIMD optimization does not work for very large 
> integers:
>  -7272907770454997143 < 8976171455044006767
>  outputVector[i] = (vector1[i] - vector2[i]) >>> 63;
>  Produces 0 instead of 1...
> Also, add DECIMAL_64 testing. Add missing DECIMAL/DECIMAL_64 Comparison and 
> IF vectorized expression classes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-23 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-20164:
--
Attachment: HIVE-20164.8.patch

> Murmur Hash : Make sure CTAS and IAS use correct bucketing version
> --
>
> Key: HIVE-20164
> URL: https://issues.apache.org/jira/browse/HIVE-20164
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20164.1.patch, HIVE-20164.2.patch, 
> HIVE-20164.3.patch, HIVE-20164.4.patch, HIVE-20164.5.patch, 
> HIVE-20164.6.patch, HIVE-20164.7.patch, HIVE-20164.8.patch
>
>
> With the migration to Murmur hash, CTAS and IAS from an old table version to a 
> new table version do not work as intended, and data is hashed using the old 
> hash logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19891) inserting into external tables with custom partition directories may cause data loss

2018-07-23 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19891:

Attachment: HIVE-19891.07.patch

> inserting into external tables with custom partition directories may cause 
> data loss
> 
>
> Key: HIVE-19891
> URL: https://issues.apache.org/jira/browse/HIVE-19891
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19891.01.patch, HIVE-19891.02.patch, 
> HIVE-19891.03.patch, HIVE-19891.04.patch, HIVE-19891.05.patch, 
> HIVE-19891.06.patch, HIVE-19891.07.patch, HIVE-19891.patch
>
>
> tbl1 is just used as a prop to create data; it could be an existing directory
> for an external table.
> Due to weird behavior of LoadTableDesc (some ancient code for overriding the
> old partition path), the custom partition path is overwritten after the query,
> and the data in it ceases to be part of the table (this can be seen in the
> describe formatted output with masking commented out in QTestUtil).
> This affects branch-1 too, so it's pretty old.
> {noformat}drop table tbl1;
> CREATE TABLE tbl1 (index int, value int ) PARTITIONED BY ( created_date 
> string );
> insert into tbl1 partition(created_date='2018-02-01') VALUES (2, 2);
> CREATE external TABLE tbl2 (index int, value int ) PARTITIONED BY ( 
> created_date string );
> ALTER TABLE tbl2 ADD PARTITION(created_date='2018-02-01');
> ALTER TABLE tbl2 PARTITION(created_date='2018-02-01') SET LOCATION 
> 'file:/Users/sergey/git/hivegit/itests/qtest/target/warehouse/tbl1/created_date=2018-02-01';
> select * from tbl2;
> describe formatted tbl2 partition(created_date='2018-02-01');
> insert into tbl2 partition(created_date='2018-02-01') VALUES (1, 1);
> select * from tbl2;
> describe formatted tbl2 partition(created_date='2018-02-01');
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19532) merge master-txnstats branch

2018-07-23 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553257#comment-16553257
 ] 

Sergey Shelukhin commented on HIVE-19532:
-

Rebased again...

> merge master-txnstats branch
> 
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, 
> HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, 
> HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.07.patch, 
> HIVE-19532.08.patch, HIVE-19532.09.patch, HIVE-19532.10.patch, 
> HIVE-19532.11.patch, HIVE-19532.12.patch, HIVE-19532.13.patch, 
> HIVE-19532.14.patch, HIVE-19532.15.patch, HIVE-19532.16.patch, 
> HIVE-19532.19.patch, HIVE-19532.22.patch, HIVE-19532.23.patch, 
> HIVE-19532.24.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19532) merge master-txnstats branch

2018-07-23 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19532:

Attachment: HIVE-19532.24.patch

> merge master-txnstats branch
> 
>
> Key: HIVE-19532
> URL: https://issues.apache.org/jira/browse/HIVE-19532
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Steve Yeom
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19532.01.patch, HIVE-19532.01.prepatch, 
> HIVE-19532.02.patch, HIVE-19532.02.prepatch, HIVE-19532.03.patch, 
> HIVE-19532.04.patch, HIVE-19532.05.patch, HIVE-19532.07.patch, 
> HIVE-19532.08.patch, HIVE-19532.09.patch, HIVE-19532.10.patch, 
> HIVE-19532.11.patch, HIVE-19532.12.patch, HIVE-19532.13.patch, 
> HIVE-19532.14.patch, HIVE-19532.15.patch, HIVE-19532.16.patch, 
> HIVE-19532.19.patch, HIVE-19532.22.patch, HIVE-19532.23.patch, 
> HIVE-19532.24.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17683) Add explain locks command

2018-07-23 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553252#comment-16553252
 ] 

Eugene Koifman commented on HIVE-17683:
---

fixed

> Add explain locks  command
> ---
>
> Key: HIVE-17683
> URL: https://issues.apache.org/jira/browse/HIVE-17683
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Critical
> Attachments: HIVE-17683.01.patch, HIVE-17683.02.patch, 
> HIVE-17683.03.patch, HIVE-17683.04.patch, HIVE-17683.05.patch, 
> HIVE-17683.06.patch
>
>
> Explore if it's possible to add info to the query plan about which locks will 
> be requested.
> Lock acquisition (for the Acid Lock Manager) is done in 
> DbTxnManager.acquireLocks(), which is called once the query starts running.  
> That would need to be refactored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20221:

Attachment: HIVE-20221.01.patch

> Increase column width for partition_params
> --
>
> Key: HIVE-20221
> URL: https://issues.apache.org/jira/browse/HIVE-20221
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20221.01.patch, HIVE-20221.01.patch
>
>
> HIVE-12274 addressed almost all metastore columns; however, it left out 
> PARTITION_PARAMS, so in the case of partitioned tables the limits are still 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553243#comment-16553243
 ] 

Hive QA commented on HIVE-20164:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932749/HIVE-20164.7.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12800/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12800/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12800/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12932749/HIVE-20164.7.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932749 - PreCommit-HIVE-Build

> Murmur Hash : Make sure CTAS and IAS use correct bucketing version
> --
>
> Key: HIVE-20164
> URL: https://issues.apache.org/jira/browse/HIVE-20164
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20164.1.patch, HIVE-20164.2.patch, 
> HIVE-20164.3.patch, HIVE-20164.4.patch, HIVE-20164.5.patch, 
> HIVE-20164.6.patch, HIVE-20164.7.patch
>
>
> With the migration to Murmur hash, CTAS and IAS from an old table version to a 
> new table version do not work as intended, and data is hashed using the old 
> hash logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553240#comment-16553240
 ] 

Hive QA commented on HIVE-20164:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932749/HIVE-20164.7.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12798/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12798/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12798/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:33:10.264
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-12798/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:33:10.267
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:33:11.413
+ rm -rf ../yetus_PreCommit-HIVE-Build-12798
+ mkdir ../yetus_PreCommit-HIVE-Build-12798
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-12798
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12798/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/itests/src/test/resources/testconfiguration.properties: does not exist 
in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/TezCompiler.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/plan/TableDesc.java: does not 
exist in index
Going to apply patch with: git apply -p1
/data/hiveptest/working/scratch/build.patch:326: trailing whitespace.
Map 1 
/data/hiveptest/working/scratch/build.patch:336: trailing whitespace.
  sort order: 
/data/hiveptest/working/scratch/build.patch:342: trailing whitespace.
Reducer 2 
/data/hiveptest/working/scratch/build.patch:421: trailing whitespace.
Map 1 
/data/hiveptest/working/scratch/build.patch:431: trailing whitespace.
  sort order: 
warning: squelched 7 whitespace errors
warning: 12 lines add whitespace errors.
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc6271673448716857551.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc6271673448716857551.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc1128311160669016236.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must 

[jira] [Commented] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553242#comment-16553242
 ] 

Hive QA commented on HIVE-20164:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932749/HIVE-20164.7.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12799/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12799/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12799/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12932749/HIVE-20164.7.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932749 - PreCommit-HIVE-Build

> Murmur Hash : Make sure CTAS and IAS use correct bucketing version
> --
>
> Key: HIVE-20164
> URL: https://issues.apache.org/jira/browse/HIVE-20164
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20164.1.patch, HIVE-20164.2.patch, 
> HIVE-20164.3.patch, HIVE-20164.4.patch, HIVE-20164.5.patch, 
> HIVE-20164.6.patch, HIVE-20164.7.patch
>
>
> With the migration to Murmur hash, CTAS and IAS from an old table version to a 
> new table version do not work as intended, and data is hashed using the old 
> hash logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-23 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553230#comment-16553230
 ] 

Jason Dere commented on HIVE-20164:
---

+1

> Murmur Hash : Make sure CTAS and IAS use correct bucketing version
> --
>
> Key: HIVE-20164
> URL: https://issues.apache.org/jira/browse/HIVE-20164
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20164.1.patch, HIVE-20164.2.patch, 
> HIVE-20164.3.patch, HIVE-20164.4.patch, HIVE-20164.5.patch, 
> HIVE-20164.6.patch, HIVE-20164.7.patch
>
>
> With the migration to Murmur hash, CTAS and IAS from an old table version to a 
> new table version do not work as intended, and data is hashed using the old 
> hash logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20212) Hiveserver2 in http mode emitting metric default.General.open_connections incorrectly

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553229#comment-16553229
 ] 

Hive QA commented on HIVE-20212:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932738/HIVE-20212.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12797/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12797/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12797/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:30:36.252
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-12797/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:30:36.255
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:30:36.782
+ rm -rf ../yetus_PreCommit-HIVE-Build-12797
+ mkdir ../yetus_PreCommit-HIVE-Build-12797
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-12797
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12797/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java:
 does not exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc7199010824310911348.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc7199010824310911348.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc3427408436082525954.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 41 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java
 does not exist: must build 

[jira] [Commented] (HIVE-16882) Improvements For Avro SerDe Package

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553223#comment-16553223
 ] 

Hive QA commented on HIVE-16882:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932737/HIVE-16882.8.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12796/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12796/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12796/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:29:30.866
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-12796/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:29:30.868
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:29:32.027
+ rm -rf ../yetus_PreCommit-HIVE-Build-12796
+ mkdir ../yetus_PreCommit-HIVE-Build-12796
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-12796
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12796/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java: does 
not exist in index
error: 
a/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroLazyObjectInspector.java:
 does not exist in index
error: a/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java: does 
not exist in index
error: a/serde/src/java/org/apache/hadoop/hive/serde2/avro/InstanceCache.java: 
does not exist in index
error: patch failed: 
serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java:25
Falling back to three-way merge...
Applied patch to 
'serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java' with 
conflicts.
Going to apply patch with: git apply -p1
error: patch failed: 
serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java:25
Falling back to three-way merge...
Applied patch to 
'serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java' with 
conflicts.
U serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-12796
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932737 - PreCommit-HIVE-Build

> Improvements For Avro SerDe Package
> ---
>
> Key: HIVE-16882
> URL: https://issues.apache.org/jira/browse/HIVE-16882
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, 
> HIVE-16882.3.patch, HIVE-16882.4.patch, HIVE-16882.5.patch, 
> HIVE-16882.6.patch, HIVE-16882.7.patch, HIVE-16882.8.patch
>
>
> # Use SLF4J parameter DEBUG logging
> # Use re-usable libraries where appropriate
> # Use enhanced for loops where appropriate
> # Fix several minor check-style errors
> # 

[jira] [Commented] (HIVE-19846) Removed Deprecated Calls From FileUtils-getJarFilesByPath

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553218#comment-16553218
 ] 

Hive QA commented on HIVE-19846:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932736/HIVE-19846.5.patch.txt

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12795/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12795/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12795/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:26:56.147
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-12795/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:26:56.150
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:26:57.284
+ rm -rf ../yetus_PreCommit-HIVE-Build-12795
+ mkdir ../yetus_PreCommit-HIVE-Build-12795
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-12795
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12795/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/common/FileUtils.java: does not 
exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc4614551273041329636.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc4614551273041329636.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc7660051489212334571.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 41 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java
 does not exist: must build 

[jira] [Commented] (HIVE-17683) Add explain locks command

2018-07-23 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553217#comment-16553217
 ] 

Sergey Shelukhin commented on HIVE-17683:
-

I think this breaks the build; ExplainLockDesc is missing from the commit.

> Add explain locks  command
> ---
>
> Key: HIVE-17683
> URL: https://issues.apache.org/jira/browse/HIVE-17683
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Critical
> Attachments: HIVE-17683.01.patch, HIVE-17683.02.patch, 
> HIVE-17683.03.patch, HIVE-17683.04.patch, HIVE-17683.05.patch, 
> HIVE-17683.06.patch
>
>
> Explore if it's possible to add info about what locks will be asked for to 
> the query plan.
> Lock acquisition (for Acid Lock Manager) is done in 
> DbTxnManager.acquireLocks() which is called once the query starts running.  
> Would need to refactor that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553215#comment-16553215
 ] 

Hive QA commented on HIVE-20221:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932734/HIVE-20221.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12794/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12794/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12794/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:24:12.724
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-12794/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:24:12.727
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   bed17e5..90d19ac  master -> origin/master
+ git reset --hard HEAD
HEAD is now at bed17e5 HIVE-20056: SparkPartitionPruner shouldn't be triggered 
by Spark tasks (Sahil Takiar, reviewed by Rui Li)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at 90d19ac HIVE-17683: Add explain locks  command (Igor 
Kryvenko via Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-07-23 18:24:14.393
+ rm -rf ../yetus_PreCommit-HIVE-Build-12794
+ mkdir ../yetus_PreCommit-HIVE-Build-12794
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-12794
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12794/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc5192042656484089507.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc5192042656484089507.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc4665511006300871134.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 41 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 

[jira] [Commented] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553213#comment-16553213
 ] 

Hive QA commented on HIVE-20032:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932725/HIVE-20032.8.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 14681 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testCancelRenewTokenFlow 
(batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testConnection (batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValid (batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValidNeg (batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeProxyAuth 
(batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeTokenAuth 
(batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testProxyAuth (batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testRenewDelegationToken 
(batchId=264)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=264)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12793/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12793/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12793/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932725 - PreCommit-HIVE-Build

> Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled
> -
>
> Key: HIVE-20032
> URL: https://issues.apache.org/jira/browse/HIVE-20032
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20032.1.patch, HIVE-20032.2.patch, 
> HIVE-20032.3.patch, HIVE-20032.4.patch, HIVE-20032.5.patch, 
> HIVE-20032.6.patch, HIVE-20032.7.patch, HIVE-20032.8.patch
>
>
> Follow up on HIVE-15104, if we don't enable RDD cacheing or groupByShuffles, 
> then we don't need to serialize the hashCode when shuffling data in HoS.
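
As a standalone illustration of the idea (a minimal Kryo 4-style sketch built around a 
hypothetical ShuffleKey class, not Hive's actual HiveKey serializer or the kryo-registrator 
module): when neither RDD caching nor a group-by shuffle needs the cached hash code, the 
serializer can simply leave those four bytes out of every shuffled record.

{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

// Hypothetical shuffle key: serialized key bytes plus a cached hash code.
class ShuffleKey {
  byte[] keyBytes;
  int hashCode;
}

class ShuffleKeySerializer extends Serializer<ShuffleKey> {

  private final boolean writeHashCode;

  ShuffleKeySerializer(boolean writeHashCode) {
    this.writeHashCode = writeHashCode;
  }

  @Override
  public void write(Kryo kryo, Output output, ShuffleKey key) {
    output.writeInt(key.keyBytes.length, true);
    output.writeBytes(key.keyBytes);
    if (writeHashCode) {          // only pay the extra 4 bytes per record when needed
      output.writeInt(key.hashCode);
    }
  }

  @Override
  public ShuffleKey read(Kryo kryo, Input input, Class<ShuffleKey> type) {
    ShuffleKey key = new ShuffleKey();
    int length = input.readInt(true);
    key.keyBytes = input.readBytes(length);
    key.hashCode = writeHashCode ? input.readInt() : 0;
    return key;
  }
}
{code}

Whether the hash code is needed is known from the configuration before the job runs, so the 
registrator can choose between the two serializer variants once per job rather than per record.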



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17683) Add explain locks command

2018-07-23 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553185#comment-16553185
 ] 

Eugene Koifman commented on HIVE-17683:
---

[~ikryvenko], could you please make a 3.x patch for this - I think it would be 
useful to users.

> Add explain locks  command
> ---
>
> Key: HIVE-17683
> URL: https://issues.apache.org/jira/browse/HIVE-17683
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Critical
> Attachments: HIVE-17683.01.patch, HIVE-17683.02.patch, 
> HIVE-17683.03.patch, HIVE-17683.04.patch, HIVE-17683.05.patch, 
> HIVE-17683.06.patch
>
>
> Explore if it's possible to add info about what locks will be asked for to 
> the query plan.
> Lock acquisition (for Acid Lock Manager) is done in 
> DbTxnManager.acquireLocks() which is called once the query starts running.  
> Would need to refactor that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-23 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-20164:
--
Attachment: HIVE-20164.7.patch

> Murmur Hash : Make sure CTAS and IAS use correct bucketing version
> --
>
> Key: HIVE-20164
> URL: https://issues.apache.org/jira/browse/HIVE-20164
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20164.1.patch, HIVE-20164.2.patch, 
> HIVE-20164.3.patch, HIVE-20164.4.patch, HIVE-20164.5.patch, 
> HIVE-20164.6.patch, HIVE-20164.7.patch
>
>
> With the migration to Murmur hash, CTAS and IAS from an old table version to a 
> new table version do not work as intended and data is hashed using the old 
> hash logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17683) Add explain locks command

2018-07-23 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553179#comment-16553179
 ] 

Eugene Koifman commented on HIVE-17683:
---

committed to master

thanks Igor for the contribution

> Add explain locks  command
> ---
>
> Key: HIVE-17683
> URL: https://issues.apache.org/jira/browse/HIVE-17683
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Critical
> Attachments: HIVE-17683.01.patch, HIVE-17683.02.patch, 
> HIVE-17683.03.patch, HIVE-17683.04.patch, HIVE-17683.05.patch, 
> HIVE-17683.06.patch
>
>
> Explore if it's possible to add info about what locks will be asked for to 
> the query plan.
> Lock acquisition (for Acid Lock Manager) is done in 
> DbTxnManager.acquireLocks() which is called once the query starts running.  
> Would need to refactor that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17683) Add explain locks command

2018-07-23 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-17683:
--
Summary: Add explain locks  command  (was: Annotate Query Plan with 
locking information)

> Add explain locks  command
> ---
>
> Key: HIVE-17683
> URL: https://issues.apache.org/jira/browse/HIVE-17683
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Critical
> Attachments: HIVE-17683.01.patch, HIVE-17683.02.patch, 
> HIVE-17683.03.patch, HIVE-17683.04.patch, HIVE-17683.05.patch, 
> HIVE-17683.06.patch
>
>
> Explore if it's possible to add info about what locks will be asked for to 
> the query plan.
> Lock acquisition (for Acid Lock Manager) is done in 
> DbTxnManager.acquireLocks() which is called once the query starts running.  
> Would need to refactor that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19636) Fix druidmini_dynamic_partition.q slowness

2018-07-23 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553177#comment-16553177
 ] 

Vineet Garg commented on HIVE-19636:


[~nishantbangarwa] How do I check the broker logs? Do you know if we get them 
with the ptest logs?
[~bslim] I believe the timeout for a batch is 40 mins. This was observed in a 
ptest run. I will try to reproduce it locally.

> Fix druidmini_dynamic_partition.q slowness
> --
>
> Key: HIVE-19636
> URL: https://issues.apache.org/jira/browse/HIVE-19636
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Prasanth Jayachandran
>Priority: Major
> Attachments: hive.12762.logs.log
>
>
> druidmini_dynamic_partition.q runs for >5 mins



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20164) Murmur Hash : Make sure CTAS and IAS use correct bucketing version

2018-07-23 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-20164:
--
Attachment: HIVE-20164.6.patch

> Murmur Hash : Make sure CTAS and IAS use correct bucketing version
> --
>
> Key: HIVE-20164
> URL: https://issues.apache.org/jira/browse/HIVE-20164
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20164.1.patch, HIVE-20164.2.patch, 
> HIVE-20164.3.patch, HIVE-20164.4.patch, HIVE-20164.5.patch, HIVE-20164.6.patch
>
>
> With the migration to Murmur hash, CTAS and IAS from an old table version to a 
> new table version do not work as intended and data is hashed using the old 
> hash logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20012) Implement SQL Standard Date and Timestamp Functions

2018-07-23 Thread Shawn Weeks (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553158#comment-16553158
 ] 

Shawn Weeks commented on HIVE-20012:


[~bharos92] Sorry for the delay. If we make it behave like Oracle, then it will 
either accept two strings, or some type that is castable to a timestamp or 
date. For example, in Oracle, to_timestamp will accept a date as a single 
parameter and cast it to a timestamp. All of my use cases, though, are for 
explicitly converting a string to a timestamp or date.
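
For reference, the two-argument form discussed above maps fairly directly onto java.time. The 
snippet below is only a sketch of the conversion semantics such a UDF would need; it is not an 
actual Hive GenericUDF, and the class and method names are made up for illustration.

{code:java}
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ToTimestampSketch {

  // to_timestamp(string, pattern): parse with an explicit format, millisecond precision included.
  static LocalDateTime toTimestamp(String value, String pattern) {
    return LocalDateTime.parse(value, DateTimeFormatter.ofPattern(pattern));
  }

  // to_date(string, pattern): same idea, but only the date portion.
  static LocalDate toDate(String value, String pattern) {
    return LocalDate.parse(value, DateTimeFormatter.ofPattern(pattern));
  }

  public static void main(String[] args) {
    System.out.println(toDate("01-01-2000", "dd-MM-yyyy"));                                // 2000-01-01
    System.out.println(toTimestamp("01-01-2000 13:00:00.000", "dd-MM-yyyy HH:mm:ss.SSS")); // 2000-01-01T13:00
  }
}
{code}

The Oracle-like single-argument overload would then just be a cast from date to timestamp (with 
a midnight time component) layered on top of the same parsing.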

> Implement SQL Standard Date and Timestamp Functions
> ---
>
> Key: HIVE-20012
> URL: https://issues.apache.org/jira/browse/HIVE-20012
> Project: Hive
>  Issue Type: New Feature
>Reporter: Shawn Weeks
>Priority: Minor
>
> I've looked around and haven't seen an existing ticket on this. Many times 
> you need to convert from arbitrary string formats to a date or a timestamp. 
> The current method using the unix_timestamp function doesn't support 
> milliseconds and is a bit clunky. I propose we implement a to_date and 
> to_timestamp function that behave like the following. It may also be useful 
> for the to_timestamp function to behave like the existing to_date function 
> and convert Hive's default timestamp string into an actual timestamp.
> {code:java}
> select to_date('01-01-2000','dd-MM-yyyy');
> 2000-01-01
> select to_timestamp('01-01-2000 13:00:00.000','dd-MM-yyyy HH:mm:ss.SSS')
> 2000-01-01 13:00:00.000{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553135#comment-16553135
 ] 

Hive QA commented on HIVE-20032:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
18s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} spark-client in master has 10 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} kryo-registrator in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
1s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
14s{color} | {color:red} kryo-registrator in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 14s{color} 
| {color:red} kryo-registrator in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 20 
unchanged - 0 fixed = 22 total (was 20) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} kryo-registrator: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 2 new + 16 unchanged - 0 fixed 
= 18 total (was 16) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} spark-client: The patch generated 1 new + 27 unchanged 
- 0 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} kryo-registrator in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12793/dev-support/hive-personality.sh
 |
| git revision | master / bed17e5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12793/yetus/patch-mvninstall-kryo-registrator.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12793/yetus/patch-mvninstall-ql.txt
 |
| compile | 

[jira] [Updated] (HIVE-20212) Hiveserver2 in http mode emitting metric default.General.open_connections incorrectly

2018-07-23 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20212:
---
Attachment: HIVE-20212.01.patch

> Hiveserver2 in http mode emitting metric default.General.open_connections 
> incorrectly
> -
>
> Key: HIVE-20212
> URL: https://issues.apache.org/jira/browse/HIVE-20212
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Minor
> Attachments: HIVE-20212.01.patch, HIVE-20212.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-16882:
---
Attachment: HIVE-16882.8.patch

> Improvements For Avro SerDe Package
> ---
>
> Key: HIVE-16882
> URL: https://issues.apache.org/jira/browse/HIVE-16882
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, 
> HIVE-16882.3.patch, HIVE-16882.4.patch, HIVE-16882.5.patch, 
> HIVE-16882.6.patch, HIVE-16882.7.patch, HIVE-16882.8.patch
>
>
> # Use SLF4J parameter DEBUG logging
> # Use re-usable libraries where appropriate
> # Use enhanced for loops where appropriate
> # Fix several minor check-style errors
> # Small performance enhancements in InstanceCache



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-16882:
---
Status: Patch Available  (was: Open)

Re-submitting the last patch to try to get a clean unit test run.

> Improvements For Avro SerDe Package
> ---
>
> Key: HIVE-16882
> URL: https://issues.apache.org/jira/browse/HIVE-16882
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, 
> HIVE-16882.3.patch, HIVE-16882.4.patch, HIVE-16882.5.patch, 
> HIVE-16882.6.patch, HIVE-16882.7.patch, HIVE-16882.8.patch
>
>
> # Use SLF4J parameter DEBUG logging
> # Use re-usable libraries where appropriate
> # Use enhanced for loops where appropriate
> # Fix several minor check-style errors
> # Small performance enhancements in InstanceCache



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16882) Improvements For Avro SerDe Package

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-16882:
---
Status: Open  (was: Patch Available)

> Improvements For Avro SerDe Package
> ---
>
> Key: HIVE-16882
> URL: https://issues.apache.org/jira/browse/HIVE-16882
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-16882.1.patch, HIVE-16882.2.patch, 
> HIVE-16882.3.patch, HIVE-16882.4.patch, HIVE-16882.5.patch, 
> HIVE-16882.6.patch, HIVE-16882.7.patch, HIVE-16882.8.patch
>
>
> # Use SLF4J parameter DEBUG logging
> # Use re-usable libraries where appropriate
> # Use enhanced for loops where appropriate
> # Fix several minor check-style errors
> # Small performance enhancements in InstanceCache



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19846) Removed Deprecated Calls From FileUtils-getJarFilesByPath

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-19846:
---
Status: Open  (was: Patch Available)

> Removed Deprecated Calls From FileUtils-getJarFilesByPath
> -
>
> Key: HIVE-19846
> URL: https://issues.apache.org/jira/browse/HIVE-19846
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-19846.1.patch, HIVE-19846.2.patch, 
> HIVE-19846.3.patch, HIVE-19846.4.patch, HIVE-19846.5.patch.txt
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19846) Removed Deprecated Calls From FileUtils-getJarFilesByPath

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-19846:
---
Attachment: HIVE-19846.5.patch.txt

> Removed Deprecated Calls From FileUtils-getJarFilesByPath
> -
>
> Key: HIVE-19846
> URL: https://issues.apache.org/jira/browse/HIVE-19846
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-19846.1.patch, HIVE-19846.2.patch, 
> HIVE-19846.3.patch, HIVE-19846.4.patch, HIVE-19846.5.patch.txt
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19846) Removed Deprecated Calls From FileUtils-getJarFilesByPath

2018-07-23 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-19846:
---
Status: Patch Available  (was: Open)

Re-attaching the latest patch in the hope that the unit tests pass.

> Removed Deprecated Calls From FileUtils-getJarFilesByPath
> -
>
> Key: HIVE-19846
> URL: https://issues.apache.org/jira/browse/HIVE-19846
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-19846.1.patch, HIVE-19846.2.patch, 
> HIVE-19846.3.patch, HIVE-19846.4.patch, HIVE-19846.5.patch.txt
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20221:

Status: Patch Available  (was: Open)

> Increase column width for partition_params
> --
>
> Key: HIVE-20221
> URL: https://issues.apache.org/jira/browse/HIVE-20221
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20221.01.patch
>
>
> HIVE-12274 addressed almost all metastore columns; however, it left out 
> PARTITION_PARAMS, so in the case of partitioned tables the limits are still 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20221:

Attachment: HIVE-20221.01.patch

> Increase column width for partition_params
> --
>
> Key: HIVE-20221
> URL: https://issues.apache.org/jira/browse/HIVE-20221
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20221.01.patch
>
>
> HIVE-12274 addressed almost all metastore columns; however, it left out 
> PARTITION_PARAMS, so in the case of partitioned tables the limits are still 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17683) Annotate Query Plan with locking information

2018-07-23 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553072#comment-16553072
 ] 

Eugene Koifman commented on HIVE-17683:
---

+1

> Annotate Query Plan with locking information
> 
>
> Key: HIVE-17683
> URL: https://issues.apache.org/jira/browse/HIVE-17683
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Critical
> Attachments: HIVE-17683.01.patch, HIVE-17683.02.patch, 
> HIVE-17683.03.patch, HIVE-17683.04.patch, HIVE-17683.05.patch, 
> HIVE-17683.06.patch
>
>
> Explore if it's possible to add info about what locks will be asked for to 
> the query plan.
> Lock acquisition (for Acid Lock Manager) is done in 
> DbTxnManager.acquireLocks() which is called once the query starts running.  
> Would need to refactor that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19636) Fix druidmini_dynamic_partition.q slowness

2018-07-23 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553065#comment-16553065
 ] 

slim bouguerra commented on HIVE-19636:
---

[~vgarg] what is the timeout? are you able to reproduce this locally, or is this 
only happening on the build machine?


> Fix druidmini_dynamic_partition.q slowness
> --
>
> Key: HIVE-19636
> URL: https://issues.apache.org/jira/browse/HIVE-19636
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Prasanth Jayachandran
>Priority: Major
> Attachments: hive.12762.logs.log
>
>
> druidmini_dynamic_partition.q runs for >5 mins



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20222) Enable Skew Join Optimization For Outer Joins

2018-07-23 Thread Gopal V (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-20222:
---
Labels: mapreduce-feature  (was: )

> Enable Skew Join Optimization For Outer Joins
> -
>
> Key: HIVE-20222
> URL: https://issues.apache.org/jira/browse/HIVE-20222
> Project: Hive
>  Issue Type: New Feature
>  Components: Logical Optimizer
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Priority: Major
>  Labels: mapreduce-feature
>
> {code}
> // We are trying to adding map joins to handle skew keys, and map join right
> // now does not work with outer joins
> if (!GenMRSkewJoinProcessor.skewJoinEnabled(parseCtx.getConf(), joinOp))
> return;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-23 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-20032:

Attachment: HIVE-20032.8.patch

> Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled
> -
>
> Key: HIVE-20032
> URL: https://issues.apache.org/jira/browse/HIVE-20032
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20032.1.patch, HIVE-20032.2.patch, 
> HIVE-20032.3.patch, HIVE-20032.4.patch, HIVE-20032.5.patch, 
> HIVE-20032.6.patch, HIVE-20032.7.patch, HIVE-20032.8.patch
>
>
> Follow up on HIVE-15104, if we don't enable RDD cacheing or groupByShuffles, 
> then we don't need to serialize the hashCode when shuffling data in HoS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20121) investigate issues with TestReplicationScenariosAcidTables

2018-07-23 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552993#comment-16552993
 ] 

Zoltan Haindrich commented on HIVE-20121:
-

oh...I see; so far I assumed that the increased execution time was the main 
symptom. I'm not sure if this test is supposed to take that long...or does it?
[~sankarh] [~anishek]

I think if you have something which could fix or handle the "Unable to shutdown 
metastore client" issue, we should commit that...it has happened a lot 
lately...could you please open a ticket for it and upload it?

> investigate issues with TestReplicationScenariosAcidTables
> --
>
> Key: HIVE-20121
> URL: https://issues.apache.org/jira/browse/HIVE-20121
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-20121.01.patch
>
>
> [~djaiswal] has noticed that somehow ptest executions are running into more and 
> more problems lately... it seems to me that these problems are coming from 
> this test
> {code}
> 2018-07-08T22:07:33,461 DEBUG [main] metastore.HiveMetaStoreClient: Unable to 
> shutdown metastore client. Will try closing transport directly.
> org.apache.thrift.transport.TTransportException: Cannot write to null 
> outputStream
> {code}
> some links to more or less recent logs:
> http://104.198.109.242/logs/PreCommit-HIVE-Build-12481/failed/240_UTBatch_itests__hive-unit_9_tests/maven-test.txt
> the hive.log is ~200M:
> http://104.198.109.242/logs/PreCommit-HIVE-Build-12481/failed/240_UTBatch_itests__hive-unit_9_tests/logs/hive.log



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552984#comment-16552984
 ] 

Hive QA commented on HIVE-20032:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932706/HIVE-20032.7.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14681 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcWithMiniHS2ErasureCoding.testDescribeErasureCoding 
(batchId=251)
org.apache.hive.jdbc.TestJdbcWithMiniHS2ErasureCoding.testExplainErasureCoding 
(batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12792/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12792/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12792/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932706 - PreCommit-HIVE-Build

> Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled
> -
>
> Key: HIVE-20032
> URL: https://issues.apache.org/jira/browse/HIVE-20032
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20032.1.patch, HIVE-20032.2.patch, 
> HIVE-20032.3.patch, HIVE-20032.4.patch, HIVE-20032.5.patch, 
> HIVE-20032.6.patch, HIVE-20032.7.patch
>
>
> Follow up on HIVE-15104, if we don't enable RDD cacheing or groupByShuffles, 
> then we don't need to serialize the hashCode when shuffling data in HoS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20182) Backport HIVE-20067 to branch-3

2018-07-23 Thread Daniel Voros (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552968#comment-16552968
 ] 

Daniel Voros commented on HIVE-20182:
-

Thank you [~kgyrtkirk]!

> Backport HIVE-20067 to branch-3
> ---
>
> Key: HIVE-20182
> URL: https://issues.apache.org/jira/browse/HIVE-20182
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-20182.1.branch-3.patch, HIVE-20182.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20180) Backport HIVE-19759 to branch-3

2018-07-23 Thread Daniel Voros (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552964#comment-16552964
 ] 

Daniel Voros commented on HIVE-20180:
-

Thank you [~kgyrtkirk]!

> Backport HIVE-19759 to branch-3
> ---
>
> Key: HIVE-20180
> URL: https://issues.apache.org/jira/browse/HIVE-20180
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-20180.1.branch-3.patch, HIVE-20180.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20181) Backport HIVE-20045 to branch-3

2018-07-23 Thread Daniel Voros (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552966#comment-16552966
 ] 

Daniel Voros commented on HIVE-20181:
-

Thank you [~kgyrtkirk]!

> Backport HIVE-20045 to branch-3
> ---
>
> Key: HIVE-20181
> URL: https://issues.apache.org/jira/browse/HIVE-20181
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-20181.1.branch-3.patch, HIVE-20181.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552957#comment-16552957
 ] 

Hive QA commented on HIVE-20032:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
5s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} spark-client in master has 10 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} kryo-registrator in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
54s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} kryo-registrator in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 13s{color} 
| {color:red} kryo-registrator in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 20 
unchanged - 0 fixed = 22 total (was 20) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} kryo-registrator: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 2 new + 16 unchanged - 0 fixed 
= 18 total (was 16) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} spark-client: The patch generated 1 new + 27 unchanged 
- 0 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} kryo-registrator in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12792/dev-support/hive-personality.sh
 |
| git revision | master / bed17e5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12792/yetus/patch-mvninstall-kryo-registrator.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12792/yetus/patch-mvninstall-ql.txt
 |
| compile | 

[jira] [Updated] (HIVE-20056) SparkPartitionPruner shouldn't be triggered by Spark tasks

2018-07-23 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-20056:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to master, thanks Rui for the review!

> SparkPartitionPruner shouldn't be triggered by Spark tasks
> --
>
> Key: HIVE-20056
> URL: https://issues.apache.org/jira/browse/HIVE-20056
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20056.1.patch, HIVE-20056.2.patch
>
>
> It looks like {{SparkDynamicPartitionPruner}} is being called by every Spark 
> task because it gets created whenever {{getRecordReader}} is called on the 
> associated {{InputFormat}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19927) Last Repl ID set by bootstrap dump is incorrect and may cause data loss if have ACID/MM tables.

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552917#comment-16552917
 ] 

Hive QA commented on HIVE-19927:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932695/HIVE-19927.02.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14681 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12791/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12791/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12791/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932695 - PreCommit-HIVE-Build

> Last Repl ID set by bootstrap dump is incorrect and may cause data loss if 
> have ACID/MM tables.
> ---
>
> Key: HIVE-19927
> URL: https://issues.apache.org/jira/browse/HIVE-19927
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Attachments: HIVE-19927.01.patch, HIVE-19927.02.patch
>
>
> During bootstrap dump of ACID tables, let's consider the below sequence.
> - Current session (REPL DUMP), Open txn (Txn1) - Event-10
> - Another session (Session-2), Open txn (Txn2) - Event-11
> - Session-2 -> Insert data (T1.D1) to ACID table. - Event-12
> - Get lastReplId = last event ID logged. (Event-12)
> - Session-2 -> Commit Txn (Txn2) - Event-13
> - Dump ACID tables based on validTxnList based on Txn1. --> This step skips 
> all the data written by txns > Txn1. So, T1.D1 will be missing.
> - Commit Txn (Txn1)
> - REPL LOAD from bootstrap dump will skip T1.D1.
> - Incremental REPL DUMP will start from Event-13 and hence lose Txn2 which is 
> opened after Txn1. So, data T1.D1 will be lost for ever.
> Proposed to capture the lastReplId of bootstrap before opening current txn 
> (Txn1) and store it in Driver context and use it for dump.
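For illustration only, here is a minimal sketch of the proposed ordering. The names
below (EventLog, TxnManager, DriverCtx) are hypothetical stand-ins, not actual Hive
classes; the point is simply that the last event id is captured before Txn1 is opened:

{code:java}
/** Sketch of the proposed fix ordering; not actual Hive code. */
public class BootstrapDumpOrderingSketch {

  interface EventLog { long currentLastEventId(); }   // hypothetical event-log accessor
  interface TxnManager { long openTxn(); }            // hypothetical txn manager

  static class DriverCtx { Long lastReplId; }         // hypothetical driver-context holder

  /**
   * Capture the last replication event id BEFORE opening the dump transaction (Txn1),
   * so that a concurrent txn (Txn2 above) committed after the capture is replayed by
   * the incremental dump instead of being silently skipped.
   */
  static long prepareBootstrapDump(EventLog log, TxnManager txns, DriverCtx ctx) {
    long lastReplId = log.currentLastEventId(); // capture first ...
    ctx.lastReplId = lastReplId;                // ... stash it in the driver context ...
    txns.openTxn();                             // ... and only then open Txn1 for the dump
    return lastReplId;
  }
}
{code}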



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20168) ReduceSinkOperator Logging Hidden

2018-07-23 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552899#comment-16552899
 ] 

BELUGA BEHR commented on HIVE-20168:


[~bharos92] Yes.  Thanks for pointing that out.  I've updated my comment.

And yes, the closeOp should also be INFO:

{code}
if (LOG.isTraceEnabled()) {
  LOG.info(toString() + ": records written - " + numRows);
}

// --->

LOG.info("{}: records written - {}", this, numRows);
{code}
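
For context, a minimal, self-contained sketch of what the corrected periodic logging could look like once the guard matches the INFO level actually used (field names follow the snippet quoted below; this is not the actual ReduceSinkOperator code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch only; field names mirror the snippet in the description, not the real operator. */
public class RowCountLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(RowCountLoggingSketch.class);

  private long numRows = 0;
  private long cntr = 1;
  private long logEveryNRows = 0; // 0 means "log at powers of ten"

  void processRow() {
    numRows++;
    // The guard now checks the same level that is actually logged.
    if (LOG.isInfoEnabled() && numRows == cntr) {
      cntr = logEveryNRows == 0 ? cntr * 10 : numRows + logEveryNRows;
      if (cntr < 0 || numRows < 0) { // overflow reset, as in the quoted snippet
        cntr = 0;
        numRows = 1;
      }
      LOG.info("{}: records written - {}", this, numRows);
    }
  }

  void closeOp() {
    // Parameterized logging needs no guard: the message is only formatted if INFO is enabled.
    LOG.info("{}: records written - {}", this, numRows);
  }
}
{code}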

> ReduceSinkOperator Logging Hidden
> -
>
> Key: HIVE-20168
> URL: https://issues.apache.org/jira/browse/HIVE-20168
> Project: Hive
>  Issue Type: Bug
>  Components: Operators
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
>  Labels: newbie, noob
>
> [https://github.com/apache/hive/blob/ac6b2a3fb195916e22b2e5f465add2ffbcdc7430/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java]
>  
> {code:java}
> if (LOG.isTraceEnabled()) {
>   if (numRows == cntr) {
> cntr = logEveryNRows == 0 ? cntr * 10 : numRows + logEveryNRows;
> if (cntr < 0 || numRows < 0) {
>   cntr = 0;
>   numRows = 1;
> }
> LOG.info(toString() + ": records written - " + numRows);
>   }
> }
> ...
> if (LOG.isTraceEnabled()) {
>   LOG.info(toString() + ": records written - " + numRows);
> }
> {code}
> There are logging guards here checking for TRACE level debugging but the 
> logging is actually INFO.  This is important logging for detecting data skew. 
>  Please change guards to check for INFO... or I would prefer that the guards 
> are removed altogether since it's very rare that a service is running with 
> only WARN level logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-20168) ReduceSinkOperator Logging Hidden

2018-07-23 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543294#comment-16543294
 ] 

BELUGA BEHR edited comment on HIVE-20168 at 7/23/18 1:57 PM:
-

Actually... don't remove anything... just change the guards to check for INFO 
level logging so that it matches the {{MapOperator}} implementation.

Though you should do:

{code}
if (LOG.isInfoEnabled()) {
  LOG.info(toString() + ": records written - " + numRows);
}

// ---

LOG.info("{} records written - {}", this, numRows);

{code}


was (Author: belugabehr):
Actually... don't remove anything... just change the guards to check for INFO 
level logging so that it matches the {{MapOperator}} implementation.

Though you should do:

{code}
if (LOG.isTraceEnabled()) {
  LOG.info(toString() + ": records written - " + numRows);
}

// ---

LOG.info("{} records written - {}", this, numRows);

{code}

> ReduceSinkOperator Logging Hidden
> -
>
> Key: HIVE-20168
> URL: https://issues.apache.org/jira/browse/HIVE-20168
> Project: Hive
>  Issue Type: Bug
>  Components: Operators
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
>  Labels: newbie, noob
>
> [https://github.com/apache/hive/blob/ac6b2a3fb195916e22b2e5f465add2ffbcdc7430/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java]
>  
> {code:java}
> if (LOG.isTraceEnabled()) {
>   if (numRows == cntr) {
> cntr = logEveryNRows == 0 ? cntr * 10 : numRows + logEveryNRows;
> if (cntr < 0 || numRows < 0) {
>   cntr = 0;
>   numRows = 1;
> }
> LOG.info(toString() + ": records written - " + numRows);
>   }
> }
> ...
> if (LOG.isTraceEnabled()) {
>   LOG.info(toString() + ": records written - " + numRows);
> }
> {code}
> There are logging guards here checking for TRACE level debugging but the 
> logging is actually INFO.  This is important logging for detecting data skew. 
>  Please change guards to check for INFO... or I would prefer that the guards 
> are removed altogether since it's very rare that a service is running with 
> only WARN level logging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20032) Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled

2018-07-23 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-20032:

Attachment: HIVE-20032.7.patch

> Don't serialize hashCode when groupByShuffle and RDD cacheing is disabled
> -
>
> Key: HIVE-20032
> URL: https://issues.apache.org/jira/browse/HIVE-20032
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20032.1.patch, HIVE-20032.2.patch, 
> HIVE-20032.3.patch, HIVE-20032.4.patch, HIVE-20032.5.patch, 
> HIVE-20032.6.patch, HIVE-20032.7.patch
>
>
> Follow up on HIVE-15104, if we don't enable RDD cacheing or groupByShuffles, 
> then we don't need to serialize the hashCode when shuffling data in HoS.
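
As a rough illustration of that idea (the {{ShuffleKey}} class below is a hypothetical stand-in for the real Hive-on-Spark key type, not the actual implementation), a custom Kryo serializer can simply skip the cached hash and let the reader recompute it:

{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.util.Arrays;

/** Hypothetical shuffle key with a locally cached hash code. */
class ShuffleKey {
  byte[] bytes;
  transient int cachedHashCode; // recomputable, so it need not travel over the wire
  ShuffleKey(byte[] bytes) {
    this.bytes = bytes;
    this.cachedHashCode = Arrays.hashCode(bytes);
  }
}

/** Serializer sketch: write only the key bytes, recompute the hash on read. */
class ShuffleKeySerializer extends Serializer<ShuffleKey> {
  @Override
  public void write(Kryo kryo, Output output, ShuffleKey key) {
    output.writeInt(key.bytes.length, true); // length prefix; the cached hash is skipped
    output.writeBytes(key.bytes);
  }

  @Override
  public ShuffleKey read(Kryo kryo, Input input, Class<ShuffleKey> type) {
    int len = input.readInt(true);
    return new ShuffleKey(input.readBytes(len)); // hash recomputed in the constructor
  }
}
{code}

Registration would be the usual {{kryo.register(ShuffleKey.class, new ShuffleKeySerializer())}}; whether the hash is written could then be toggled based on the group-by-shuffle / RDD caching settings.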



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19927) Last Repl ID set by bootstrap dump is incorrect and may cause data loss if have ACID/MM tables.

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552882#comment-16552882
 ] 

Hive QA commented on HIVE-19927:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 21 new + 166 
unchanged - 0 fixed = 187 total (was 166) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 1 new + 258 unchanged - 1 
fixed = 259 total (was 259) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12791/dev-support/hive-personality.sh
 |
| git revision | master / 6b15816 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12791/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12791/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12791/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12791/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: itests/hive-unit ql standalone-metastore/metastore-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12791/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Last Repl ID set by bootstrap dump is incorrect and may cause data loss if 
> have ACID/MM tables.
> ---
>
> Key: HIVE-19927
> URL: 

[jira] [Commented] (HIVE-19267) Replicate ACID/MM tables write operations.

2018-07-23 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552881#comment-16552881
 ] 

mahesh kumar behera commented on HIVE-19267:


[~sankarh]
In the last commit the auto-generated file was changed instead of the .thrift 
file. I have fixed it by adding it to the .thrift file, so these changes are 
shown in this patch.

As per the latest code change, newFiles should be set to null. Will check if 
it's required in the master branch also.

> Replicate ACID/MM tables write operations.
> --
>
> Key: HIVE-19267
> URL: https://issues.apache.org/jira/browse/HIVE-19267
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Fix For: 4.0.0
>
> Attachments: HIVE-19267.01-branch-3.patch, HIVE-19267.01.patch, 
> HIVE-19267.02-branch-3.patch, HIVE-19267.02.patch, HIVE-19267.03.patch, 
> HIVE-19267.04.patch, HIVE-19267.05.patch, HIVE-19267.06.patch, 
> HIVE-19267.07.patch, HIVE-19267.08.patch, HIVE-19267.09.patch, 
> HIVE-19267.10.patch, HIVE-19267.11.patch, HIVE-19267.12.patch, 
> HIVE-19267.13.patch, HIVE-19267.14.patch, HIVE-19267.15.patch, 
> HIVE-19267.16.patch, HIVE-19267.17.patch, HIVE-19267.18.patch, 
> HIVE-19267.19.patch, HIVE-19267.20.patch, HIVE-19267.21.patch, 
> HIVE-19267.22.patch
>
>
>  
> h1. Replicate ACID write Events
>  * Create new EVENT_WRITE event with related message format to log the write 
> operations with in a txn along with data associated.
>  * Log this event when perform any writes (insert into, insert overwrite, 
> load table, delete, update, merge, truncate) on table/partition.
>  * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple 
> partitions, then need to log one event per partition.
>  * DbNotificationListener should log this type of event to special metastore 
> table named "MTxnWriteNotificationLog".
>  * This table should maintain a map of txn ID against list of 
> tables/partitions written by given txn.
>  * The entry for a given txn should be removed by the cleaner thread that 
> removes the expired events from EventNotificationTable.
> h1. Replicate Commit Txn operation (with writes)
> Add new EVENT_COMMIT_TXN to log the metadata/data of all tables/partitions 
> modified within the txn.
> *Source warehouse:*
>  * This event should read the EVENT_WRITEs from "MTxnWriteNotificationLog" 
> metastore table to consolidate the list of tables/partitions modified within 
> this txn scope.
>  * Based on the list of tables/partitions modified and table Write ID, need 
> to compute the list of delta files added by this txn.
>  * Repl dump should read this message and dump the metadata and delta files 
> list.
> *Target warehouse:*
>  * Ensure snapshot isolation at target for on-going read txns which shouldn't 
> view the data replicated from committed txn. (Ensured with open and allocate 
> write ID events).
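
As a very rough, in-memory illustration of the txn-id -> written tables/partitions bookkeeping described above (the real design persists this in the MTxnWriteNotificationLog metastore table; the class below is illustrative only, not Hive code):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy stand-in for the per-txn write log; illustrative only. */
public class TxnWriteLogSketch {

  static final class WriteEntry {
    final String table;
    final String partition; // null for unpartitioned writes
    WriteEntry(String table, String partition) { this.table = table; this.partition = partition; }
    @Override public String toString() { return partition == null ? table : table + "/" + partition; }
  }

  // txn id -> tables/partitions written by that txn
  private final Map<Long, List<WriteEntry>> writesByTxn = new ConcurrentHashMap<>();

  /** One entry per table/partition touched by the txn (one event per partition, as above). */
  void logWrite(long txnId, String table, String partition) {
    writesByTxn.computeIfAbsent(txnId, id -> Collections.synchronizedList(new ArrayList<>()))
               .add(new WriteEntry(table, partition));
  }

  /** At commit time, consolidate and remove everything the txn wrote (input to the commit event). */
  List<WriteEntry> consolidateOnCommit(long txnId) {
    return writesByTxn.remove(txnId); // expired entries would be dropped by the cleaner thread
  }
}
{code}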



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20121) investigate issues with TestReplicationScenariosAcidTables

2018-07-23 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552868#comment-16552868
 ] 

mahesh kumar behera commented on HIVE-20121:


[~kgyrtkirk]
The issue with this test taking more time is the number of test cases 
in a single file, so I have split it into two files. 

The "Unable to shutdown metastore client" error is coming from the call to 
syncMetaStoreClient.close(). But even after fixing this, I did not see much 
improvement in execution time.

> investigate issues with TestReplicationScenariosAcidTables
> --
>
> Key: HIVE-20121
> URL: https://issues.apache.org/jira/browse/HIVE-20121
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: mahesh kumar behera
>Priority: Major
> Attachments: HIVE-20121.01.patch
>
>
> [~djaiswal] has noticed that somehow ptest executions are getting more and 
> more problems lately... it seems to me that these problems are coming from 
> this test
> {code}
> 2018-07-08T22:07:33,461 DEBUG [main] metastore.HiveMetaStoreClient: Unable to 
> shutdown metastore client. Will try closing transport directly.
> org.apache.thrift.transport.TTransportException: Cannot write to null 
> outputStream
> {code}
> some links to more or less recent logs:
> http://104.198.109.242/logs/PreCommit-HIVE-Build-12481/failed/240_UTBatch_itests__hive-unit_9_tests/maven-test.txt
> the hive.log is ~200M:
> http://104.198.109.242/logs/PreCommit-HIVE-Build-12481/failed/240_UTBatch_itests__hive-unit_9_tests/logs/hive.log



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19927) Last Repl ID set by bootstrap dump is incorrect and may cause data loss if have ACID/MM tables.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19927:

Status: Patch Available  (was: Open)

> Last Repl ID set by bootstrap dump is incorrect and may cause data loss if 
> have ACID/MM tables.
> ---
>
> Key: HIVE-19927
> URL: https://issues.apache.org/jira/browse/HIVE-19927
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Attachments: HIVE-19927.01.patch, HIVE-19927.02.patch
>
>
> During bootstrap dump of ACID tables, let's consider the below sequence.
> - Current session (REPL DUMP), Open txn (Txn1) - Event-10
> - Another session (Session-2), Open txn (Txn2) - Event-11
> - Session-2 -> Insert data (T1.D1) to ACID table. - Event-12
> - Get lastReplId = last event ID logged. (Event-12)
> - Session-2 -> Commit Txn (Txn2) - Event-13
> - Dump ACID tables based on validTxnList based on Txn1. --> This step skips 
> all the data written by txns > Txn1. So, T1.D1 will be missing.
> - Commit Txn (Txn1)
> - REPL LOAD from bootstrap dump will skip T1.D1.
> - Incremental REPL DUMP will start from Event-13 and hence lose Txn2 which is 
> opened after Txn1. So, data T1.D1 will be lost for ever.
> Proposed to capture the lastReplId of bootstrap before opening current txn 
> (Txn1) and store it in Driver context and use it for dump.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19927) Last Repl ID set by bootstrap dump is incorrect and may cause data loss if have ACID/MM tables.

2018-07-23 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552775#comment-16552775
 ] 

Sankar Hariappan commented on HIVE-19927:
-

Attached 02.patch with
 * Rebased with master
 * Bug fix for idempotent behaviour of create/drop functions which occur 
concurrently with the bootstrap dump after fetching the last repl id.
 * Set the last repl ID in the queryState conf for each query, overwriting the old one.
 * Set last repl ID only if txn is opened.

Request [~maheshk114] to take a look!

> Last Repl ID set by bootstrap dump is incorrect and may cause data loss if 
> have ACID/MM tables.
> ---
>
> Key: HIVE-19927
> URL: https://issues.apache.org/jira/browse/HIVE-19927
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Attachments: HIVE-19927.01.patch, HIVE-19927.02.patch
>
>
> During bootstrap dump of ACID tables, let's consider the below sequence.
> - Current session (REPL DUMP), Open txn (Txn1) - Event-10
> - Another session (Session-2), Open txn (Txn2) - Event-11
> - Session-2 -> Insert data (T1.D1) to ACID table. - Event-12
> - Get lastReplId = last event ID logged. (Event-12)
> - Session-2 -> Commit Txn (Txn2) - Event-13
> - Dump ACID tables based on validTxnList based on Txn1. --> This step skips 
> all the data written by txns > Txn1. So, T1.D1 will be missing.
> - Commit Txn (Txn1)
> - REPL LOAD from bootstrap dump will skip T1.D1.
> - Incremental REPL DUMP will start from Event-13 and hence lose Txn2 which is 
> opened after Txn1. So, data T1.D1 will be lost for ever.
> Proposed to capture the lastReplId of bootstrap before opening current txn 
> (Txn1) and store it in Driver context and use it for dump.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19927) Last Repl ID set by bootstrap dump is incorrect and may cause data loss if have ACID/MM tables.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19927:

Attachment: HIVE-19927.02.patch

> Last Repl ID set by bootstrap dump is incorrect and may cause data loss if 
> have ACID/MM tables.
> ---
>
> Key: HIVE-19927
> URL: https://issues.apache.org/jira/browse/HIVE-19927
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Attachments: HIVE-19927.01.patch, HIVE-19927.02.patch
>
>
> During bootstrap dump of ACID tables, let's consider the below sequence.
> - Current session (REPL DUMP), Open txn (Txn1) - Event-10
> - Another session (Session-2), Open txn (Txn2) - Event-11
> - Session-2 -> Insert data (T1.D1) to ACID table. - Event-12
> - Get lastReplId = last event ID logged. (Event-12)
> - Session-2 -> Commit Txn (Txn2) - Event-13
> - Dump ACID tables based on validTxnList based on Txn1. --> This step skips 
> all the data written by txns > Txn1. So, T1.D1 will be missing.
> - Commit Txn (Txn1)
> - REPL LOAD from bootstrap dump will skip T1.D1.
> - Incremental REPL DUMP will start from Event-13 and hence lose Txn2 which is 
> opened after Txn1. So, data T1.D1 will be lost for ever.
> Proposed to capture the lastReplId of bootstrap before opening current txn 
> (Txn1) and store it in Driver context and use it for dump.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19927) Last Repl ID set by bootstrap dump is incorrect and may cause data loss if have ACID/MM tables.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19927:

Status: Open  (was: Patch Available)

> Last Repl ID set by bootstrap dump is incorrect and may cause data loss if 
> have ACID/MM tables.
> ---
>
> Key: HIVE-19927
> URL: https://issues.apache.org/jira/browse/HIVE-19927
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, pull-request-available, replication
> Attachments: HIVE-19927.01.patch, HIVE-19927.02.patch
>
>
> During bootstrap dump of ACID tables, let's consider the below sequence.
> - Current session (REPL DUMP), Open txn (Txn1) - Event-10
> - Another session (Session-2), Open txn (Txn2) - Event-11
> - Session-2 -> Insert data (T1.D1) to ACID table. - Event-12
> - Get lastReplId = last event ID logged. (Event-12)
> - Session-2 -> Commit Txn (Txn2) - Event-13
> - Dump ACID tables based on validTxnList based on Txn1. --> This step skips 
> all the data written by txns > Txn1. So, T1.D1 will be missing.
> - Commit Txn (Txn1)
> - REPL LOAD from bootstrap dump will skip T1.D1.
> - Incremental REPL DUMP will start from Event-13 and hence lose Txn2 which is 
> opened after Txn1. So, data T1.D1 will be lost for ever.
> Proposed to capture the lastReplId of bootstrap before opening current txn 
> (Txn1) and store it in Driver context and use it for dump.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-20221:
---


> Increase column width for partition_params
> --
>
> Key: HIVE-20221
> URL: https://issues.apache.org/jira/browse/HIVE-20221
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-12274 has addressed almost all metastore columns; however, it has left 
> out PARTITION_PARAMS; so in the case of partitioned tables the limits are still 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20221) Increase column width for partition_params

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20221:

Component/s: Metastore

> Increase column width for partition_params
> --
>
> Key: HIVE-20221
> URL: https://issues.apache.org/jira/browse/HIVE-20221
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> HIVE-12274 has addressed almost all metastore columns; however, it has left 
> out PARTITION_PARAMS; so in the case of partitioned tables the limits are still 
> there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18192:

Comment: was deleted

(was: [~anishek]
[~ekoifman] have written the design to introduce table Write ID.

Cc [~alangates])

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch, HIVE-18192.14.patch, 
> HIVE-18192.15.patch, HIVE-18192.16.patch, HIVE-18192.17.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in an 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level 
> write ID allocated, and a persisted map of global txn id -> table -> write 
> id for that table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details
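
A toy illustration of the per-table write ID allocation and the global txn id -> table -> write id map mentioned above (in-memory only; the real mapping is persisted in the metastore, and this is not Hive code):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative sketch of per-table write id allocation; not the Hive implementation. */
public class TableWriteIdSketch {

  // Monotonically increasing write id per table.
  private final Map<String, AtomicLong> nextWriteIdPerTable = new ConcurrentHashMap<>();

  // Global txn id -> (table -> write id allocated to that txn for that table).
  private final Map<Long, Map<String, Long>> txnToTableWriteId = new ConcurrentHashMap<>();

  /** Allocate (or return the already allocated) write id for this txn/table pair. */
  long allocateWriteId(long txnId, String table) {
    return txnToTableWriteId
        .computeIfAbsent(txnId, t -> new ConcurrentHashMap<>())
        .computeIfAbsent(table,
            tbl -> nextWriteIdPerTable.computeIfAbsent(tbl, x -> new AtomicLong()).incrementAndGet());
  }
}
{code}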



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20180) Backport HIVE-19759 to branch-3

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20180:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

pushed to branch-3. Thank you [~dvoros]!

> Backport HIVE-19759 to branch-3
> ---
>
> Key: HIVE-20180
> URL: https://issues.apache.org/jira/browse/HIVE-20180
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-20180.1.branch-3.patch, HIVE-20180.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20182) Backport HIVE-20067 to branch-3

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20182:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

pushed to branch-3. Thank you [~dvoros]!

> Backport HIVE-20067 to branch-3
> ---
>
> Key: HIVE-20182
> URL: https://issues.apache.org/jira/browse/HIVE-20182
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-20182.1.branch-3.patch, HIVE-20182.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20181) Backport HIVE-20045 to branch-3

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20181:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

pushed to branch-3. Thank you [~dvoros]!

> Backport HIVE-20045 to branch-3
> ---
>
> Key: HIVE-20181
> URL: https://issues.apache.org/jira/browse/HIVE-20181
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-20181.1.branch-3.patch, HIVE-20181.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18279) Incorrect condition in StatsOpimizer

2018-07-23 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-18279:

Resolution: Invalid
Status: Resolved  (was: Patch Available)

As per my earlier comment... I don't think this is an incorrect condition.
Please submit a use case which produces incorrect results because of this.

> Incorrect condition in StatsOpimizer
> 
>
> Key: HIVE-18279
> URL: https://issues.apache.org/jira/browse/HIVE-18279
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HIVE-18279.1.patch
>
>
> At the moment {{StatsOpimizer}} has code
> {code}
> if (rowCnt == null) {
>   // if rowCnt < 1 than its either empty table or table on which 
> stats are not
>   //  computed We assume the worse and don't attempt to optimize.
>   Logger.debug("Table doesn't have up to date stats " + 
> tbl.getTableName());
>   rowCnt = null;
> }
> {code}
> in method {{private Long getRowCnt()}}. Condition 
> {code}
> if (rowCnt == null) {
> {code}
> should be changed to 
> {code}
> if (rowCnt == null || rowCnt == 0) {
> {code}
> because 0 value also means that table stats may not be computed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20184) Backport HIVE-20085 to branch-3

2018-07-23 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552699#comment-16552699
 ] 

Zoltan Haindrich commented on HIVE-20184:
-

Something might be missing... EXTERNAL_TABLE_PURGE is unknown. Is the patch run 
against branch-3?

> Backport HIVE-20085 to branch-3
> ---
>
> Key: HIVE-20184
> URL: https://issues.apache.org/jira/browse/HIVE-20184
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-20184.1.branch-3.patch, HIVE-20184.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20185) Backport HIVE-20111 to branch-3

2018-07-23 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552694#comment-16552694
 ] 

Zoltan Haindrich commented on HIVE-20185:
-

The latest test run seems to fail because MetaStoreUtils.isExternalTablePurge 
seems to be missing - possibly this needs another ticket?

> Backport HIVE-20111 to branch-3
> ---
>
> Key: HIVE-20185
> URL: https://issues.apache.org/jira/browse/HIVE-20185
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-20185.1.branch-3.patch, HIVE-20185.2-branch-3.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552692#comment-16552692
 ] 

Sankar Hariappan commented on HIVE-20192:
-

The test failures are unrelated to this patch and also occur for some other 
builds.

01-branch-3.patch is committed to branch-3.

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20192.01-branch-3.patch, HIVE-20192.01.patch, 
> HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days and HS2 was observed in an 
> unresponsive state. Also, it was observed that FGC (full GC) was happening 
> regularly.
> From JXray report it is seen that pmCache(List of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> of UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated into 
> the ThreadWithGarbageCleanup.threadRawStoreMap, which means the new 
> RawStore never gets cleaned up when the thread exits.
>  
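
A simplified, hypothetical sketch of the cleanup idea: always record the most recently created store in the per-thread map, so the latest instance is the one shut down on thread exit (the names below are stand-ins, not the actual ThreadWithGarbageCleanup/RawStore code):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical illustration of per-thread store tracking; not the actual HS2/metastore code. */
public class ThreadStoreCleanupSketch {

  interface Store { void shutdown(); }

  // threadId -> latest store created on that thread
  private static final Map<Long, Store> THREAD_STORE_MAP = new ConcurrentHashMap<>();

  /** Always record the most recently created store, replacing any stale entry. */
  static void registerForCleanup(Store store) {
    THREAD_STORE_MAP.put(Thread.currentThread().getId(), store);
  }

  /** Called when the worker thread exits, so the latest store is actually shut down. */
  static void cleanupOnThreadExit() {
    Store store = THREAD_STORE_MAP.remove(Thread.currentThread().getId());
    if (store != null) {
      store.shutdown();
    }
  }
}
{code}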



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-20192:

Fix Version/s: 3.2.0

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20192.01-branch-3.patch, HIVE-20192.01.patch, 
> HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days and HS2 was observed in an 
> unresponsive state. Also, it was observed that FGC (full GC) was happening 
> regularly.
> From JXray report it is seen that pmCache(List of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> of UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated into 
> the ThreadWithGarbageCleanup.threadRawStoreMap, which means the new 
> RawStore never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-20192:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20192.01-branch-3.patch, HIVE-20192.01.patch, 
> HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days and HS2 was observed in an 
> unresponsive state. Also, it was observed that FGC (full GC) was happening 
> regularly.
> From JXray report it is seen that pmCache(List of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> of UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated into 
> the ThreadWithGarbageCleanup.threadRawStoreMap, which means the new 
> RawStore never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19267) Replicate ACID/MM tables write operations.

2018-07-23 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552688#comment-16552688
 ] 

Sankar Hariappan commented on HIVE-19267:
-

[~maheshk114],

02-branch-3.patch has additional changes compared to 22.patch.
 # TOPNKEY in the thrift generated files, which was added by the branch-3 patch.
 # - List newFiles = Collections.synchronizedList(new ArrayList());
+ List newFiles = null;

Please take a look at this.

> Replicate ACID/MM tables write operations.
> --
>
> Key: HIVE-19267
> URL: https://issues.apache.org/jira/browse/HIVE-19267
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Fix For: 4.0.0
>
> Attachments: HIVE-19267.01-branch-3.patch, HIVE-19267.01.patch, 
> HIVE-19267.02-branch-3.patch, HIVE-19267.02.patch, HIVE-19267.03.patch, 
> HIVE-19267.04.patch, HIVE-19267.05.patch, HIVE-19267.06.patch, 
> HIVE-19267.07.patch, HIVE-19267.08.patch, HIVE-19267.09.patch, 
> HIVE-19267.10.patch, HIVE-19267.11.patch, HIVE-19267.12.patch, 
> HIVE-19267.13.patch, HIVE-19267.14.patch, HIVE-19267.15.patch, 
> HIVE-19267.16.patch, HIVE-19267.17.patch, HIVE-19267.18.patch, 
> HIVE-19267.19.patch, HIVE-19267.20.patch, HIVE-19267.21.patch, 
> HIVE-19267.22.patch
>
>
>  
> h1. Replicate ACID write Events
>  * Create new EVENT_WRITE event with related message format to log the write 
> operations within a txn along with the data associated.
>  * Log this event when performing any writes (insert into, insert overwrite, 
> load table, delete, update, merge, truncate) on table/partition.
>  * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple 
> partitions, then need to log one event per partition.
>  * DbNotificationListener should log this type of event to special metastore 
> table named "MTxnWriteNotificationLog".
>  * This table should maintain a map of txn ID against list of 
> tables/partitions written by given txn.
>  * The entry for a given txn should be removed by the cleaner thread that 
> removes the expired events from EventNotificationTable.
> h1. Replicate Commit Txn operation (with writes)
> Add new EVENT_COMMIT_TXN to log the metadata/data of all tables/partitions 
> modified within the txn.
> *Source warehouse:*
>  * This event should read the EVENT_WRITEs from "MTxnWriteNotificationLog" 
> metastore table to consolidate the list of tables/partitions modified within 
> this txn scope.
>  * Based on the list of tables/partitions modified and table Write ID, need 
> to compute the list of delta files added by this txn.
>  * Repl dump should read this message and dump the metadata and delta files 
> list.
> *Target warehouse:*
>  * Ensure snapshot isolation at target for on-going read txns which shouldn't 
> view the data replicated from committed txn. (Ensured with open and allocate 
> write ID events).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19636) Fix druidmini_dynamic_partition.q slowness

2018-07-23 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552687#comment-16552687
 ] 

Nishant Bangarwa commented on HIVE-19636:
-

[~vgarg] looks like the broker node might have crashed/died for some reason 
and Hive is getting connection refused on the Druid broker port. Please check 
broker logs for any exceptions. 


> Fix druidmini_dynamic_partition.q slowness
> --
>
> Key: HIVE-19636
> URL: https://issues.apache.org/jira/browse/HIVE-19636
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Prasanth Jayachandran
>Priority: Major
> Attachments: hive.12762.logs.log
>
>
> druidmini_dynamic_partition.q runs for >5 mins



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552663#comment-16552663
 ] 

Hive QA commented on HIVE-20192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932678/HIVE-20192.01-branch-3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 97 failed/errored test(s), 14404 tests 
executed
*Failed tests:*
{noformat}
TestAddPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestAddPartitionsFromPartSpec - did not produce a TEST-*.xml file (likely timed 
out) (batchId=228)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestAggregateStatsCache - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestAlterPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestAppendPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=271)
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestCatalogNonDefaultClient - did not produce a TEST-*.xml file (likely timed 
out) (batchId=226)
TestCatalogNonDefaultSvr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestCatalogOldClient - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestCatalogs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestCheckConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=236)
TestDatabases - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestDefaultConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDropPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=271)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=229)
TestExchangePartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestFMSketchSerialization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=236)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestForeignKey - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestFunctions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestGetPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestGetTableMeta - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHLLNoBias - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHLLSerialization - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHdfsUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestHiveAlterHandler - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed 
out) (batchId=234)
TestHiveMetaStorePartitionSpecs - did not produce a TEST-*.xml file (likely 
timed out) (batchId=228)
TestHiveMetaStoreSchemaMethods - did not produce a TEST-*.xml file (likely 
timed out) (batchId=234)
TestHiveMetaStoreTimeout - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=231)
TestHiveMetastoreCli - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHyperLogLog - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHyperLogLogDense - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHyperLogLogMerge - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHyperLogLogSparse - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestJSONMessageDeserializer - did not produce a TEST-*.xml file (likely timed 
out) (batchId=234)
TestListPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestLockRequestBuilder - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestMarkPartitionRemote - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestMetaStoreConnectionUrlHook - did not produce a TEST-*.xml file 

[jira] [Commented] (HIVE-20220) Incorrect result when hive.groupby.skewindata is enabled

2018-07-23 Thread Ganesha Shreedhara (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552606#comment-16552606
 ] 

Ganesha Shreedhara commented on HIVE-20220:
---

I'll correct the golden files if this fix is feasible. 

> Incorrect result when hive.groupby.skewindata is enabled
> 
>
> Key: HIVE-20220
> URL: https://issues.apache.org/jira/browse/HIVE-20220
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-20220.patch
>
>
> hive.groupby.skewindata makes use of rand UDF to randomly distribute grouped 
> by keys to the reducers and hence avoids overloading a single reducer when 
> there is a skew in data. 
> This random distribution of keys is buggy when the reducer fails to fetch the 
> mapper output due to a faulty datanode or any other reason. When reducer 
> finds that it can't fetch mapper output, it sends a signal to Application 
> Master to reattempt the corresponding map task. The reattempted map task will 
> now get the different random value from rand function and hence the keys that 
> gets distributed now to the reducer will not be same as the previous run. 
>  
> *Steps to reproduce:*
> create table test(id int);
> insert into test values 
> (1),(2),(2),(3),(3),(3),(4),(4),(4),(4),(5),(5),(5),(5),(5),(6),(6),(6),(6),(6),(6),(7),(7),(7),(7),(7),(7),(7),(7),(8),(8),(8),(8),(8),(8),(8),(8),(9),(9),(9),(9),(9),(9),(9),(9),(9);
> SET hive.groupby.skewindata=true;
> SET mapreduce.reduce.reduces=2;
> //Add a debug port for reducer
> select count(1) from test group by id;
> //Remove mapper's intermediate output file when map stage is completed and 
> one out of 2 reduce tasks is completed and then continue the run. This causes 
> 2nd reducer to send event to Application Master to rerun the map task. 
> The following is the expected result. 
> 1
> 2
> 3
> 4
> 5
> 6
> 8
> 8
> 9 
>  
> But you may get a different result due to a different value returned by the 
> rand function in the second run, causing a different distribution of keys.
> This needs to be fixed such that the mapper distributes the same keys even if 
> it is reattempted multiple times. 
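
One hypothetical way to make the spread reproducible (illustration only, not the actual Hive operator code) is to seed the per-row randomness with a value that is identical for every attempt of the same map task, e.g. the task id rather than the attempt id:

{code:java}
import java.util.Random;

/** Hypothetical sketch: per-row spread that is reproducible across map task reattempts. */
public class StableSkewSpreadSketch {

  /** Problem: an unseeded Random yields a different sequence on every task attempt. */
  static Random unstableSpread() {
    return new Random();
  }

  /**
   * Idea: seed the Random with something that is identical for every attempt of the
   * same map task (e.g. the task id, NOT the attempt id), so a re-executed task
   * assigns exactly the same pseudo-random value to each row it re-processes.
   */
  static Random stableSpread(int taskId) {
    return new Random(taskId);
  }

  public static void main(String[] args) {
    int numReducers = 2;
    Random r1 = stableSpread(7);  // first attempt of map task 7
    Random r2 = stableSpread(7);  // re-attempt of the same task
    // Both attempts route row i to the same reducer:
    for (int i = 0; i < 5; i++) {
      System.out.println(r1.nextInt(numReducers) == r2.nextInt(numReducers)); // always true
    }
  }
}
{code}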



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552569#comment-16552569
 ] 

Hive QA commented on HIVE-20192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 15s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-12790/patches/PreCommit-HIVE-Build-12790.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12790/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20192.01-branch-3.patch, HIVE-20192.01.patch, 
> HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days, and HS2 was observed in an 
> unresponsive state. Also observed that full GC (FGC) was happening regularly.
> From the JXray report it is seen that pmCache (a list of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated in 
> ThreadWithGarbageCleanup.threadRawStoreMap, which means the new RawStore 
> never gets cleaned up when the thread exits.
>  
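> As a self-contained sketch of the cleanup pattern this description implies 
> (hypothetical class and method names, not the actual HS2/ObjectStore code): 
> when a per-thread resource is re-created, the per-thread registry should be 
> overwritten so that the latest instance is the one closed on thread exit.
> {code:java}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> 
> public class ThreadResourceRegistry {
> 
>   // Maps thread name -> the resource to clean up when that thread exits.
>   private static final Map<String, AutoCloseable> THREAD_RESOURCES =
>       new ConcurrentHashMap<>();
> 
>   // Always overwrite the entry; closing the stale instance avoids leaking
>   // an object that the registry no longer tracks.
>   public static void register(AutoCloseable resource) throws Exception {
>     AutoCloseable previous =
>         THREAD_RESOURCES.put(Thread.currentThread().getName(), resource);
>     if (previous != null && previous != resource) {
>       previous.close();
>     }
>   }
> 
>   // Invoked from the thread's exit/cleanup hook.
>   public static void cleanupCurrentThread() throws Exception {
>     AutoCloseable resource =
>         THREAD_RESOURCES.remove(Thread.currentThread().getName());
>     if (resource != null) {
>       resource.close();
>     }
>   }
> }
> {code}
> In HS2 terms, the idea is that a re-created RawStore would take the old 
> instance's slot in the per-thread map rather than being left untracked.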



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-20192:

Status: Open  (was: Patch Available)

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20192.01-branch-3.patch, HIVE-20192.01.patch, 
> HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days, and HS2 was observed in an 
> unresponsive state. Also observed that full GC (FGC) was happening regularly.
> From the JXray report it is seen that pmCache (a list of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated in 
> ThreadWithGarbageCleanup.threadRawStoreMap, which means the new RawStore 
> never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-20192:

Status: Patch Available  (was: Open)

Attached patch for branch-3.

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20192.01-branch-3.patch, HIVE-20192.01.patch, 
> HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days, and HS2 was observed in an 
> unresponsive state. Also observed that full GC (FGC) was happening regularly.
> From the JXray report it is seen that pmCache (a list of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated in 
> ThreadWithGarbageCleanup.threadRawStoreMap, which means the new RawStore 
> never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-20192:

Attachment: HIVE-20192.01-branch-3.patch

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20192.01-branch-3.patch, HIVE-20192.01.patch, 
> HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days, and HS2 was observed in an 
> unresponsive state. Also observed that full GC (FGC) was happening regularly.
> From the JXray report it is seen that pmCache (a list of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated in 
> ThreadWithGarbageCleanup.threadRawStoreMap, which means the new RawStore 
> never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20082) HiveDecimal to string conversion doesn't format the decimal correctly - master

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552529#comment-16552529
 ] 

Hive QA commented on HIVE-20082:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12932652/HIVE-20082.3.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14682 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/12789/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12789/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12789/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12932652 - PreCommit-HIVE-Build

> HiveDecimal to string conversion doesn't format the decimal correctly - master
> --
>
> Key: HIVE-20082
> URL: https://issues.apache.org/jira/browse/HIVE-20082
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-20082.1.patch, HIVE-20082.2.patch, 
> HIVE-20082.3.patch
>
>
> Example: LPAD on a decimal(7,1) value of 0 returns "0" (plus padding) but it 
> should be "0.0" (plus padding)
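> For illustration only (using java.math.BigDecimal rather than Hive's 
> HiveDecimal API), the expected behaviour is scale-preserving formatting 
> applied before the padding:
> {code:java}
> import java.math.BigDecimal;
> import java.math.RoundingMode;
> 
> public class DecimalFormatting {
>   public static void main(String[] args) {
>     // A decimal(7,1) value of 0 should render with one fractional digit.
>     BigDecimal zero = new BigDecimal("0");
>     String formatted = zero.setScale(1, RoundingMode.UNNECESSARY).toPlainString();
>     System.out.println(formatted);                        // 0.0
>     // Mirrors LPAD(col, 6, ' ') applied to the formatted value.
>     System.out.println(String.format("%6s", formatted));  // "   0.0"
>   }
> }
> {code}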



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552501#comment-16552501
 ] 

ASF GitHub Bot commented on HIVE-20192:
---

Github user sankarh closed the pull request at:

https://github.com/apache/hive/pull/402


> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20192.01.patch, HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days, and HS2 was observed in an 
> unresponsive state. Also observed that full GC (FGC) was happening regularly.
> From the JXray report it is seen that pmCache (a list of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated in 
> ThreadWithGarbageCleanup.threadRawStoreMap, which means the new RawStore 
> never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-20192:

Target Version/s: 4.0.0, 3.2.0  (was: 3.1.0, 4.0.0, 3.2.0)

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20192.01.patch, HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days, and HS2 was observed in an 
> unresponsive state. Also observed that full GC (FGC) was happening regularly.
> From the JXray report it is seen that pmCache (a list of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated in 
> ThreadWithGarbageCleanup.threadRawStoreMap, which means the new RawStore 
> never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20192) HS2 with embedded metastore is leaking JDOPersistenceManager objects.

2018-07-23 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552500#comment-16552500
 ] 

Sankar Hariappan commented on HIVE-20192:
-

02.patch is committed to master!

> HS2 with embedded metastore is leaking JDOPersistenceManager objects.
> -
>
> Key: HIVE-20192
> URL: https://issues.apache.org/jira/browse/HIVE-20192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: HiveServer2, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-20192.01.patch, HIVE-20192.02.patch
>
>
> HiveServer2 instances were crashing every 3-4 days, and HS2 was observed in an 
> unresponsive state. Also observed that full GC (FGC) was happening regularly.
> From the JXray report it is seen that pmCache (a list of JDOPersistenceManager 
> objects) is occupying 84% of the heap and there are around 16,000 references 
> to UDFClassLoader.
> {code:java}
> 10,759,230K (84.7%) Object tree for GC root(s) Java Static 
> org.apache.hadoop.hive.metastore.ObjectStore.pmf
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.pmCache ↘ 10,744,419K 
> (84.6%), 1 reference(s)
>   - j.u.Collections$SetFromMap.m ↘ 10,744,419K (84.6%), 1 reference(s)
> - {java.util.concurrent.ConcurrentHashMap}.keys ↘ 10,743,764K (84.5%), 
> 16,872 reference(s)
>   - org.datanucleus.api.jdo.JDOPersistenceManager.ec ↘ 10,738,831K 
> (84.5%), 16,872 reference(s)
> ... 3 more references together retaining 4,933K (< 0.1%)
> - java.util.concurrent.ConcurrentHashMap self 655K (< 0.1%), 1 object(s)
>   ... 2 more references together retaining 48b (< 0.1%)
> - org.datanucleus.api.jdo.JDOPersistenceManagerFactory.nucleusContext ↘ 
> 14,810K (0.1%), 1 reference(s)
> ... 3 more references together retaining 96b (< 0.1%){code}
> When the RawStore object is re-created, it is not allowed to be updated in 
> ThreadWithGarbageCleanup.threadRawStoreMap, which means the new RawStore 
> never gets cleaned up when the thread exits.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20082) HiveDecimal to string conversion doesn't format the decimal correctly - master

2018-07-23 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16552495#comment-16552495
 ] 

Hive QA commented on HIVE-20082:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} serde in master has 195 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} accumulo-handler in master has 21 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} accumulo-handler in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} serde: The patch generated 4 new + 299 unchanged - 0 
fixed = 303 total (was 299) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} accumulo-handler: The patch generated 1 new + 53 
unchanged - 0 fixed = 54 total (was 53) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 16 new + 898 unchanged - 3 
fixed = 914 total (was 901) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-12789/dev-support/hive-personality.sh
 |
| git revision | master / 170a012 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12789/yetus/patch-mvninstall-accumulo-handler.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12789/yetus/diff-checkstyle-serde.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12789/yetus/diff-checkstyle-accumulo-handler.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12789/yetus/diff-checkstyle-ql.txt
 |
| modules | C: serde accumulo-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-12789/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HiveDecimal to string conversion doesn't format the decimal correctly - master
> --
>
>   
