[jira] [Commented] (HIVE-23016) Extract JdbcConnectionParams from Utils Class

2020-03-18 Thread Anirban Ghosh (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062323#comment-17062323
 ] 

Anirban Ghosh commented on HIVE-23016:
--

Hi! I'd like to take this up if no one else is working on this already.

> Extract JdbcConnectionParams from Utils Class
> -
>
> Key: HIVE-23016
> URL: https://issues.apache.org/jira/browse/HIVE-23016
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Priority: Minor
>  Labels: n00b, newbie, noob
>
> And make it its own class.
> https://github.com/apache/hive/blob/4700e210ef7945278c4eb313c9ebd810b0224da1/jdbc/src/java/org/apache/hive/jdbc/Utils.java#L72
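The refactor the ticket asks for can be sketched as follows: the nested type is promoted to a top-level class so callers depend on it directly rather than on Utils. The fields below are illustrative placeholders, not the actual members of Hive's JdbcConnectionParams.

```java
// Sketch only: JdbcConnectionParams promoted from a nested type inside
// Utils to its own top-level class. host/port are hypothetical placeholders;
// the real class carries many more connection settings.
public class JdbcConnectionParams {
    private String host;
    private int port;

    public String getHost() { return host; }
    public void setHost(String host) { this.host = host; }

    public int getPort() { return port; }
    public void setPort(int port) { this.port = port; }
}
```

Existing references such as Utils.JdbcConnectionParams would then be updated to import the new class directly.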



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062319#comment-17062319
 ] 

Hive QA commented on HIVE-23032:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997072/HIVE-23032.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18123 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query21]
 (batchId=306)
org.apache.hadoop.hive.metastore.security.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
 (batchId=299)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21172/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21172/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21172/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997072 - PreCommit-HIVE-Build

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch, HIVE-23032.2.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests 
> showed a significant improvement after batching was turned on.
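The batching idea can be illustrated independently of JDBC: instead of issuing one multi-row statement, rows are grouped into fixed-size chunks and each chunk is flushed in one round trip (with PreparedStatement this would be addBatch per row and one executeBatch per chunk). The partition helper below is a hypothetical sketch, not Hive's actual lock-insert code.

```java
import java.util.ArrayList;
import java.util.List;

public final class LockBatcher {
    // Split the rows to insert into chunks of at most batchSize; each chunk
    // would be sent to the database as a single JDBC batch.
    public static <T> List<List<T>> partition(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                rows.subList(i, Math.min(i + batchSize, rows.size()))));
        }
        return batches;
    }
}
```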





[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-18 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Status: Open  (was: Patch Available)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.10.patch, 
> HIVE-23004.11.patch, HIVE-23004.2.patch, HIVE-23004.4.patch, 
> HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices





[jira] [Updated] (HIVE-23004) Support Decimal64 operations across multiple vertices

2020-03-18 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-23004:

Attachment: HIVE-23004.11.patch
Status: Patch Available  (was: Open)

> Support Decimal64 operations across multiple vertices
> -
>
> Key: HIVE-23004
> URL: https://issues.apache.org/jira/browse/HIVE-23004
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-23004.1.patch, HIVE-23004.10.patch, 
> HIVE-23004.11.patch, HIVE-23004.2.patch, HIVE-23004.4.patch, 
> HIVE-23004.6.patch, HIVE-23004.7.patch, HIVE-23004.8.patch, HIVE-23004.9.patch
>
>
> Support Decimal64 operations across multiple vertices





[jira] [Commented] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062294#comment-17062294
 ] 

Hive QA commented on HIVE-23032:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
14s{color} | {color:blue} standalone-metastore/metastore-server in master has 
186 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 13 new + 570 unchanged - 15 fixed = 583 total (was 585) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21172/dev-support/hive-personality.sh
 |
| git revision | master / 6f9ae63 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21172/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21172/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch, HIVE-23032.2.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests 
> showed a significant improvement after batching was turned on.





[jira] [Commented] (HIVE-23051) Clean up BucketCodec

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062289#comment-17062289
 ] 

Hive QA commented on HIVE-23051:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997067/HIVE-23051.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 80 failed/errored test(s), 18123 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.io.orc.TestOrcRawRecordMerger.testNewBaseAndDelta 
(batchId=333)
org.apache.hadoop.hive.ql.io.orc.TestOrcRawRecordMerger.testRecordReaderIncompleteDelta
 (batchId=333)
org.apache.hadoop.hive.ql.parse.TestReplicationOfHiveStreaming.testHiveStreamingDynamicPartitionWithTxnBatchSizeAsOne
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestReplicationOfHiveStreaming.testHiveStreamingStaticPartitionWithTxnBatchSizeAsOne
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestReplicationOfHiveStreaming.testHiveStreamingUnpartitionedWithTxnBatchSizeAsOne
 (batchId=256)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactAfterAbort 
(batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactAfterAbortNew 
(batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreaming
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreamingForSplitUpdate
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.majorCompactWhileStreamingForSplitUpdateNew
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactAfterAbort 
(batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactAfterAbortNew 
(batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreaming
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreamingWithSplitUpdate
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.minorCompactWhileStreamingWithSplitUpdateNew
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testStatsAfterCompactionPartTbl
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.testStatsAfterCompactionPartTblNew
 (batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCrudCompactorOnTez.testMinorCompactionWhileStreaming
 (batchId=270)
org.apache.hadoop.hive.ql.txn.compactor.TestCrudCompactorOnTez.testMinorCompactionWhileStreamingAfterAbort
 (batchId=270)
org.apache.hadoop.hive.ql.txn.compactor.TestCrudCompactorOnTez.testMinorCompactionWhileStreamingWithAbort
 (batchId=270)
org.apache.hadoop.hive.ql.txn.compactor.TestCrudCompactorOnTez.testMinorCompactionWhileStreamingWithAbortInMiddle
 (batchId=270)
org.apache.hadoop.hive.ql.txn.compactor.TestCrudCompactorOnTez.testMinorCompactionWhileStreamingWithSplitUpdate
 (batchId=270)
org.apache.hive.hcatalog.streaming.TestStreaming.testBucketing (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testBucketingWhereBucketColIsNotFirstCol
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testConcurrentTransactionBatchCommits
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testErrorHandling (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testFileDump (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptDataFiles 
(batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testFileDumpCorruptSideFiles 
(batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testInterleavedTransactionBatchCommits
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testMultipleTransactionBatchCommits
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testNoBuckets (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testRemainingTransactions 
(batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchAbort 
(batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchAbortAndCommit
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Delimited
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_DelimitedUGI
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Json
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_Regex
 (batchId=226)
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchCommit_RegexUGI
 (batchId=226)
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testMulti (batchId=226)
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchAbort
 (batchId=226)
org.apache.hive.hcatalog.streaming.mutate.TestMutations.testTransactionBatchCommitPartitioned
 (batchId=226)
org.apache.hive.hcatalog.streaming.mutate.TestMut

[jira] [Commented] (HIVE-23002) Optimise LazyBinaryUtils.writeVLong

2020-03-18 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062278#comment-17062278
 ] 

Ashutosh Chauhan commented on HIVE-23002:
-

ok.. thanks for clarification. +1

> Optimise LazyBinaryUtils.writeVLong
> ---
>
> Key: HIVE-23002
> URL: https://issues.apache.org/jira/browse/HIVE-23002
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-23002.1.patch, HIVE-23002.2.patch, 
> HIVE-23002.3.patch, Screenshot 2020-03-10 at 5.01.34 AM.jpg
>
>
> [https://github.com/apache/hive/blob/master/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java#L420]
> It would be good to add a method which accepts scratch bytes.
>  
>   !Screenshot 2020-03-10 at 5.01.34 AM.jpg|width=452,height=321!





[jira] [Comment Edited] (HIVE-23002) Optimise LazyBinaryUtils.writeVLong

2020-03-18 Thread Rajesh Balamohan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062275#comment-17062275
 ] 

Rajesh Balamohan edited comment on HIVE-23002 at 3/19/20, 4:36 AM:
---

No, it will not be allocated for every column value, nor for every row. The 
scratch bytes are allocated once and reused per operator.

 

e.g: 
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/reducesink/VectorReduceSinkCommonOperator.java#L113
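The per-operator scratch-buffer idea can be sketched with a simplified varint encoder. The real LazyBinaryUtils encoding (sign handling, byte layout) differs, so treat this purely as an illustration of passing caller-owned scratch bytes instead of allocating a fresh array per value.

```java
public final class VLongScratch {
    // Encode a non-negative long as a base-128 varint into the caller's
    // scratch buffer and return the number of bytes written. The operator
    // allocates `scratch` once and reuses it for every value it serializes.
    public static int writeVLong(long v, byte[] scratch) {
        int i = 0;
        while ((v & ~0x7FL) != 0) {
            scratch[i++] = (byte) ((v & 0x7FL) | 0x80);
            v >>>= 7;
        }
        scratch[i++] = (byte) v;
        return i;
    }
}
```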


was (Author: rajesh.balamohan):
No, it will not be allocated for every column value, nor for every row. The 
scratch bytes are allocated once and reused per operator.

> Optimise LazyBinaryUtils.writeVLong
> ---
>
> Key: HIVE-23002
> URL: https://issues.apache.org/jira/browse/HIVE-23002
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-23002.1.patch, HIVE-23002.2.patch, 
> HIVE-23002.3.patch, Screenshot 2020-03-10 at 5.01.34 AM.jpg
>
>
> [https://github.com/apache/hive/blob/master/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java#L420]
> It would be good to add a method which accepts scratch bytes.
>  
>   !Screenshot 2020-03-10 at 5.01.34 AM.jpg|width=452,height=321!





[jira] [Updated] (HIVE-22995) Add support for location for managed tables on database

2020-03-18 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22995:
-
Status: Patch Available  (was: Open)

Some test failures and refactoring.

> Add support for location for managed tables on database
> ---
>
> Key: HIVE-22995
> URL: https://issues.apache.org/jira/browse/HIVE-22995
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22995.1.patch, HIVE-22995.2.patch, 
> HIVE-22995.3.patch, Hive Metastore Support for Tenant-based storage 
> heirarchy.pdf
>
>
> I have attached the initial spec to this jira.
> Default location for database would be the external table base directory. 
> Managed location can be optionally specified.
> {code}
> CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
>   [COMMENT database_comment]
>   [LOCATION hdfs_path]
>   [MANAGEDLOCATION hdfs_path]
>   [WITH DBPROPERTIES (property_name=property_value, ...)];
> ALTER (DATABASE|SCHEMA) database_name SET MANAGEDLOCATION hdfs_path;
> {code}





[jira] [Updated] (HIVE-22995) Add support for location for managed tables on database

2020-03-18 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22995:
-
Attachment: HIVE-22995.3.patch

> Add support for location for managed tables on database
> ---
>
> Key: HIVE-22995
> URL: https://issues.apache.org/jira/browse/HIVE-22995
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22995.1.patch, HIVE-22995.2.patch, 
> HIVE-22995.3.patch, Hive Metastore Support for Tenant-based storage 
> heirarchy.pdf
>
>
> I have attached the initial spec to this jira.
> Default location for database would be the external table base directory. 
> Managed location can be optionally specified.
> {code}
> CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
>   [COMMENT database_comment]
>   [LOCATION hdfs_path]
>   [MANAGEDLOCATION hdfs_path]
>   [WITH DBPROPERTIES (property_name=property_value, ...)];
> ALTER (DATABASE|SCHEMA) database_name SET MANAGEDLOCATION hdfs_path;
> {code}





[jira] [Updated] (HIVE-22995) Add support for location for managed tables on database

2020-03-18 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-22995:
-
Status: Open  (was: Patch Available)

> Add support for location for managed tables on database
> ---
>
> Key: HIVE-22995
> URL: https://issues.apache.org/jira/browse/HIVE-22995
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22995.1.patch, HIVE-22995.2.patch, Hive Metastore 
> Support for Tenant-based storage heirarchy.pdf
>
>
> I have attached the initial spec to this jira.
> Default location for database would be the external table base directory. 
> Managed location can be optionally specified.
> {code}
> CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
>   [COMMENT database_comment]
>   [LOCATION hdfs_path]
>   [MANAGEDLOCATION hdfs_path]
>   [WITH DBPROPERTIES (property_name=property_value, ...)];
> ALTER (DATABASE|SCHEMA) database_name SET MANAGEDLOCATION hdfs_path;
> {code}





[jira] [Commented] (HIVE-23002) Optimise LazyBinaryUtils.writeVLong

2020-03-18 Thread Rajesh Balamohan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062275#comment-17062275
 ] 

Rajesh Balamohan commented on HIVE-23002:
-

No, it will not be allocated for every column value, nor for every row. The 
scratch bytes are allocated once and reused per operator.

> Optimise LazyBinaryUtils.writeVLong
> ---
>
> Key: HIVE-23002
> URL: https://issues.apache.org/jira/browse/HIVE-23002
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-23002.1.patch, HIVE-23002.2.patch, 
> HIVE-23002.3.patch, Screenshot 2020-03-10 at 5.01.34 AM.jpg
>
>
> [https://github.com/apache/hive/blob/master/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java#L420]
> It would be good to add a method which accepts scratch bytes.
>  
>   !Screenshot 2020-03-10 at 5.01.34 AM.jpg|width=452,height=321!





[jira] [Commented] (HIVE-23051) Clean up BucketCodec

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062272#comment-17062272
 ] 

Hive QA commented on HIVE-23051:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 7 
fixed = 1 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21171/dev-support/hive-personality.sh
 |
| git revision | master / 6f9ae63 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21171/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Clean up BucketCodec
> 
>
> Key: HIVE-23051
> URL: https://issues.apache.org/jira/browse/HIVE-23051
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-23051.1.patch
>
>
> A couple of nagging things caught my eye with this class.  The first thing:
> {code:java|title=BucketCodec.java}
>   int statementId = options.getStatementId() >= 0 ? 
> options.getStatementId() : 0;
>   assert this.version >=0 && this.version <= MAX_VERSION
> : "Version out of range: " + version;
>   if(!(options.getBucketId() >= 0 && options.getBucketId() <= 
> MAX_BUCKET_ID)) {
> throw new IllegalArgumentException("bucketId out of range: " + 
> options.getBucketId());
>   }
>   if(!(statementId >= 0 && statementId <= MAX_STATEMENT_ID)) {
> throw new IllegalArgumentException("statementId out of range: " + 
> statementId);
>   }
> {code}
> {{statementId}} gets capped: if it is less than 0, it is rounded up to 0.  
> However, the later check that {{statementId}} is non-negative can never 
> fail, precisely because of that rounding.  
> # Remove the rounding behavior.
> # Improve the error messages.
> # Fail fast in the constructor if the version is invalid.
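The three points above could look roughly like this. The bounds are hypothetical placeholders, not the real BucketCodec constants: a negative statementId now fails fast instead of being silently rounded to 0, and each message states the accepted range.

```java
public final class BucketCodecCheck {
    // Hypothetical bounds for illustration; not the real BucketCodec limits.
    static final int MAX_BUCKET_ID = 4095;
    static final int MAX_STATEMENT_ID = 4095;

    public static void validate(int bucketId, int statementId) {
        // No rounding: out-of-range values, including negatives, fail fast
        // with a message that names the accepted range.
        if (bucketId < 0 || bucketId > MAX_BUCKET_ID) {
            throw new IllegalArgumentException(
                "bucketId out of range [0, " + MAX_BUCKET_ID + "]: " + bucketId);
        }
        if (statementId < 0 || statementId > MAX_STATEMENT_ID) {
            throw new IllegalArgumentException(
                "statementId out of range [0, " + MAX_STATEMENT_ID + "]: " + statementId);
        }
    }
}
```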





[jira] [Commented] (HIVE-23002) Optimise LazyBinaryUtils.writeVLong

2020-03-18 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062270#comment-17062270
 ] 

Ashutosh Chauhan commented on HIVE-23002:
-

[~rajesh.balamohan] I don't follow your response to my question:
bq. per LazyBinarySerializeWrite; reused.
I am still unclear on whether we are going to allocate a new byte array for 
every column value.

> Optimise LazyBinaryUtils.writeVLong
> ---
>
> Key: HIVE-23002
> URL: https://issues.apache.org/jira/browse/HIVE-23002
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-23002.1.patch, HIVE-23002.2.patch, 
> HIVE-23002.3.patch, Screenshot 2020-03-10 at 5.01.34 AM.jpg
>
>
> [https://github.com/apache/hive/blob/master/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java#L420]
> It would be good to add a method which accepts scratch bytes.
>  
>   !Screenshot 2020-03-10 at 5.01.34 AM.jpg|width=452,height=321!





[jira] [Commented] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062258#comment-17062258
 ] 

Hive QA commented on HIVE-23049:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997062/HIVE-23049.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18118 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence
 (batchId=252)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21170/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21170/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21170/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997062 - PreCommit-HIVE-Build

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> Running with Oracle 19, the parameters passed to a query in DataNucleus are 
> not boxed, so they must be explicitly set as Long rather than long.
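The boxing point can be illustrated outside DataNucleus: query parameters travel through Object-typed APIs, so a primitive long should be boxed explicitly rather than relying on the store layer to cope. The map-based sketch below is illustrative only; the actual fix touches the metastore's JDO query code.

```java
import java.util.HashMap;
import java.util.Map;

public final class QueryParams {
    // Parameters destined for an Object-typed query API: box the primitive
    // explicitly so the store layer receives a java.lang.Long, not a bare long.
    public static Map<String, Object> forConstraintCheck(long maxColumnId) {
        Map<String, Object> params = new HashMap<>();
        params.put("maxColumnId", Long.valueOf(maxColumnId));
        return params;
    }
}
```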





[jira] [Updated] (HIVE-23034) Arrow serializer should not keep the reference of arrow offset and validity buffers

2020-03-18 Thread mahesh kumar behera (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-23034:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[^HIVE-23034.01.patch] committed to master. Thanks [~ShubhamChaurasia] for 
fixing it and [~thejas] for review.

> Arrow serializer should not keep the reference of arrow offset and validity 
> buffers
> ---
>
> Key: HIVE-23034
> URL: https://issues.apache.org/jira/browse/HIVE-23034
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Serializers/Deserializers
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23034.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, a part of writeList() method in arrow serializer is implemented 
> like - 
> {code:java}
> final ArrowBuf offsetBuffer = arrowVector.getOffsetBuffer();
> int nextOffset = 0;
> for (int rowIndex = 0; rowIndex < size; rowIndex++) {
>   int selectedIndex = rowIndex;
>   if (vectorizedRowBatch.selectedInUse) {
> selectedIndex = vectorizedRowBatch.selected[rowIndex];
>   }
>   if (hiveVector.isNull[selectedIndex]) {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
>   } else {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
> nextOffset += (int) hiveVector.lengths[selectedIndex];
> arrowVector.setNotNull(rowIndex);
>   }
> }
> offsetBuffer.setInt(size * OFFSET_WIDTH, nextOffset);
> {code}
> 1) Here we obtain a reference to {{final ArrowBuf offsetBuffer = 
> arrowVector.getOffsetBuffer();}} and keep updating the arrow vector and 
> offset vector. 
> Problem - 
> {{arrowVector.setNotNull(rowIndex)}} keeps checking the index and reallocates 
> the offset and validity buffers when a threshold is crossed, updates the 
> references internally and also releases the old buffers (which decrements the 
> buffer reference count). Now the reference which we obtained in 1) becomes 
> obsolete. Furthermore, if we try to read or write through the old buffer, we see - 
> {code:java}
> Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
>   at 
> io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1413)
>   at io.netty.buffer.ArrowBuf.checkIndexD(ArrowBuf.java:131)
>   at io.netty.buffer.ArrowBuf.chk(ArrowBuf.java:162)
>   at io.netty.buffer.ArrowBuf.setInt(ArrowBuf.java:656)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:432)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:352)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:288)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:419)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:205)
> {code}
>  
> Solution - 
> This can be fixed by fetching the buffer ( {{arrowVector.getOffsetBuffer()}} ) 
> each time we want to update it, instead of caching the reference. 
> In our internal tests this is seen very frequently on Arrow 0.8.0 but not on 
> 0.10.0; it should still be handled the same way for 0.10.0, since that version 
> does the same thing internally.
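The stale-reference failure mode above can be reproduced without Arrow at all: it is the classic pattern of caching a container's internal buffer while the container reallocates that buffer on growth. A minimal self-contained Java analogue (GrowableBuffer is a stand-in for illustration, not Arrow's ListVector):

```java
import java.util.Arrays;

class GrowableBuffer {
    // Like ListVector, this container reallocates its internal buffer
    // when a capacity threshold is crossed.
    private int[] data = new int[4];
    private int size = 0;

    // Analogue of arrowVector.getOffsetBuffer(): hands out the current buffer.
    int[] rawBuffer() { return data; }

    void append(int v) {
        if (size == data.length) {
            data = Arrays.copyOf(data, data.length * 2); // old buffer is now stale
        }
        data[size++] = v;
    }

    public static void main(String[] args) {
        GrowableBuffer b = new GrowableBuffer();
        int[] cached = b.rawBuffer();   // reference taken once, as in the bug
        for (int i = 0; i < 10; i++) {
            b.append(i);
        }
        // After reallocation the cached reference no longer sees new writes:
        System.out.println(cached == b.rawBuffer()); // false
        System.out.println(b.rawBuffer()[9]);        // 9
    }
}
```

In Arrow the consequence is harsher than a silently stale array: the released buffer's reference count drops to zero, so touching it raises the `IllegalReferenceCountException` shown in the stack trace.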



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062245#comment-17062245
 ] 

Hive QA commented on HIVE-23049:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
21s{color} | {color:blue} standalone-metastore/metastore-server in master has 
186 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21170/dev-support/hive-personality.sh
 |
| git revision | master / 2c5a109 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21170/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> When running with Oracle 19, the parameters passed to a query in DataNucleus 
> are not boxed, so the value must be explicitly set as Long rather than long.
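The long-vs-Long distinction matters because Java overload resolution treats the primitive and the wrapper differently; a self-contained sketch (the `accept` overloads are illustrative only, not DataNucleus API):

```java
class BoxingCheck {
    // Overload resolution: a primitive long prefers the primitive overload
    // without boxing, while an explicitly boxed Long selects the Long overload.
    static String accept(long v) { return "primitive"; }
    static String accept(Long v) { return "boxed"; }

    public static void main(String[] args) {
        long raw = 42L;
        System.out.println(accept(raw));               // primitive
        System.out.println(accept(Long.valueOf(raw))); // boxed
    }
}
```

Hence the fix of passing `Long.valueOf(...)` (or a `Long` variable) rather than a bare `long` when the query expects an object parameter.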



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22402) Deprecate and Replace Hive PerfLogger

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062229#comment-17062229
 ] 

Hive QA commented on HIVE-22402:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12984201/HIVE-22402.4.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21169/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21169/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21169/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2020-03-19 02:06:51.634
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-21169/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-03-19 02:06:51.636
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2c5a109 HIVE-22955 PreUpgradeTool can fail because access to 
CharsetDecoder is not synchronized (Gergely Hanko, reviewed by Miklos Gergely)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 2c5a109 HIVE-22955 PreUpgradeTool can fail because access to 
CharsetDecoder is not synchronized (Gergely Hanko, reviewed by Miklos Gergely)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-03-19 02:06:52.567
+ rm -rf ../yetus_PreCommit-HIVE-Build-21169
+ mkdir ../yetus_PreCommit-HIVE-Build-21169
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-21169
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-21169/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not 
exist in index
error: a/common/src/java/org/apache/hadoop/hive/ql/log/PerfLogger.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java: does not exist in 
index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java: does not 
exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/SerializationUtilities.java: does 
not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/SparkHashTableSinkOperator.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java: does not 
exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkDynamicPartitionPruner.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkMapRecordHandler.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlan.java: does 
not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkRecordHandler.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkReduceRecordHandler.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java: does 
not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MapRecordProcessor.java: does 
not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/MergeFileRecordProcessor.java: 
does not exist in index
error: a

[jira] [Commented] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062228#comment-17062228
 ] 

Hive QA commented on HIVE-23045:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997055/HIVE-23045.3.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18123 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21168/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21168/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21168/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997055 - PreCommit-HIVE-Build

> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch, 
> HIVE-23045.3.patch
>
>
> A ZooKeeper 3.5.5 server can operate with SSL/TLS secured connections with its 
> clients.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with ZooKeeper servers. The ZooKeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by different configuration parameters, corresponding to the different 
> use cases. 
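For reference, the ZooKeeper 3.5.x client side enables TLS through JVM system properties (property names are from the ZooKeeper SSL User Guide linked above; the keystore paths and passwords below are placeholders, and the Hive configuration keys that would feed them are not yet defined by this ticket):

```java
class ZkClientSslProps {
    // Sets the documented ZooKeeper 3.5.x client-side TLS system properties.
    static void enableClientTls(String keystore, String keystorePass,
                                String truststore, String truststorePass) {
        System.setProperty("zookeeper.client.secure", "true");
        // TLS requires the Netty-based client connection socket:
        System.setProperty("zookeeper.clientCnxnSocket",
            "org.apache.zookeeper.ClientCnxnSocketNetty");
        System.setProperty("zookeeper.ssl.keyStore.location", keystore);
        System.setProperty("zookeeper.ssl.keyStore.password", keystorePass);
        System.setProperty("zookeeper.ssl.trustStore.location", truststore);
        System.setProperty("zookeeper.ssl.trustStore.password", truststorePass);
    }
}
```

Because these are process-wide system properties, each of the client locations listed above would need the same wiring (or a shared helper) rather than per-connection settings.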



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: HIVE-22997.15.patch

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.14.patch, 
> HIVE-22997.15.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-23032:
--
Attachment: HIVE-23032.2.patch

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch, HIVE-23032.2.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests showed 
> a significant improvement after turning batching on.
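JDBC-side batching of the kind described replaces one multi-row INSERT with `addBatch()`/`executeBatch()` calls flushed every N rows; since that needs a live connection, the self-contained part to show is the chunking itself (`partition` is a generic helper sketched here, not the actual TxnHandler code):

```java
import java.util.ArrayList;
import java.util.List;

class Batcher {
    // Split the full row list into fixed-size batches. In the real lock-insert
    // loop, each batch would be bound to a PreparedStatement via addBatch()
    // and flushed with executeBatch().
    static <T> List<List<T>> partition(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }
}
```

One statement executed per batch keeps round trips bounded regardless of how many lock components a transaction requests.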



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22841) ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner IllegalArgumentException on invalid cookie signature

2020-03-18 Thread Thejas Nair (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062182#comment-17062182
 ] 

Thejas Nair commented on HIVE-22841:


Thanks [~krisden] [~kgyrtkirk]!

> ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner 
> IllegalArgumentException on invalid cookie signature
> -
>
> Key: HIVE-22841
> URL: https://issues.apache.org/jira/browse/HIVE-22841
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22841.1.patch, HIVE-22841.2.patch, 
> HIVE-22841.3.patch
>
>
> Currently CookieSigner throws an IllegalArgumentException if the cookie 
> signature is invalid. 
> {code:java}
> if (!MessageDigest.isEqual(originalSignature.getBytes(), 
> currentSignature.getBytes())) {
>   throw new IllegalArgumentException("Invalid sign, original = " + 
> originalSignature +
> " current = " + currentSignature);
> }
> {code}
> CookieSigner is only used in ThriftHttpServlet#getClientNameFromCookie, which 
> doesn't handle the IllegalArgumentException; it only checks whether the 
> value from the cookie is null or not.
> https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java#L295
> {code:java}
>   currValue = signer.verifyAndExtract(currValue);
>   // Retrieve the user name, do the final validation step.
>   if (currValue != null) {
> {code}
> This should be fixed to either:
> a) Have CookieSigner not throw an IllegalArgumentException
> b) Improve ThriftHttpServlet to handle CookieSigner throwing an 
> IllegalArgumentException
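Option (b) can be sketched without the real CookieSigner by wrapping the verification call and mapping an invalid signature to the same null value that the existing null check already handles (`verifier` below stands in for `CookieSigner.verifyAndExtract`):

```java
import java.util.function.UnaryOperator;

class CookieVerify {
    // Treat an invalid signature exactly like a missing/invalid cookie value,
    // so the caller's existing "if (currValue != null)" check keeps working.
    static String safeVerifyAndExtract(UnaryOperator<String> verifier, String raw) {
        try {
            return verifier.apply(raw);
        } catch (IllegalArgumentException e) {
            return null; // invalid sign -> no authenticated client name
        }
    }
}
```

This keeps CookieSigner's fail-fast behavior for other callers while preventing an unauthenticated request with a tampered cookie from surfacing as a server-side exception.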



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062179#comment-17062179
 ] 

Hive QA commented on HIVE-23045:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 19s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-21168/patches/PreCommit-HIVE-Build-21168.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21168/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch, 
> HIVE-23045.3.patch
>
>
> A ZooKeeper 3.5.5 server can operate with SSL/TLS secured connections with its 
> clients.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with ZooKeeper servers. The ZooKeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by different configuration parameters, corresponding to the different 
> use cases. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23044) Make sure Cleaner doesn't delete delta directories for running queries

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062174#comment-17062174
 ] 

Hive QA commented on HIVE-23044:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997048/HIVE-23044.2.branch-3.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21167/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21167/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21167/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2020-03-19 00:33:23.690
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-21167/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-3.1 ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-03-19 00:33:23.692
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2c5a109 HIVE-22955 PreUpgradeTool can fail because access to 
CharsetDecoder is not synchronized (Gergely Hanko, reviewed by Miklos Gergely)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout branch-3.1
Switched to branch 'branch-3.1'
Your branch is up-to-date with 'origin/branch-3.1'.
+ git reset --hard origin/branch-3.1
HEAD is now at 60fdf3d HIVE-22704: Distribution package incorrectly ships the 
upgrade.order files from the metastore module (Zoltan Haindrich reviewed by 
Naveen Gangam)
+ git merge --ff-only origin/branch-3.1
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-03-19 00:33:29.158
+ rm -rf ../yetus_PreCommit-HIVE-Build-21167
+ mkdir ../yetus_PreCommit-HIVE-Build-21167
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-21167
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-21167/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
fatal: unrecognized input
Trying to apply the patch with -p1
fatal: unrecognized input
Trying to apply the patch with -p2
fatal: unrecognized input
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-21167
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997048 - PreCommit-HIVE-Build

> Make sure Cleaner doesn't delete delta directories for running queries
> --
>
> Key: HIVE-23044
> URL: https://issues.apache.org/jira/browse/HIVE-23044
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Major
> Attachments: HIVE-23044.1.branch-3.1.patch, 
> HIVE-23044.2.branch-3.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062173#comment-17062173
 ] 

Hive QA commented on HIVE-23032:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997035/HIVE-23032.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 18118 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning]
 (batchId=200)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=293)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=293)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomNonExistent
 (batchId=293)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead 
(batchId=293)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=293)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime
 (batchId=293)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime
 (batchId=293)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21166/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21166/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21166/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997035 - PreCommit-HIVE-Build

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests showed 
> a significant improvement after turning batching on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22841) ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner IllegalArgumentException on invalid cookie signature

2020-03-18 Thread Kat Petre (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062157#comment-17062157
 ] 

Kat Petre commented on HIVE-22841:
--

Thank you guys, much appreciated. 

> ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner 
> IllegalArgumentException on invalid cookie signature
> -
>
> Key: HIVE-22841
> URL: https://issues.apache.org/jira/browse/HIVE-22841
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22841.1.patch, HIVE-22841.2.patch, 
> HIVE-22841.3.patch
>
>
> Currently CookieSigner throws an IllegalArgumentException if the cookie 
> signature is invalid. 
> {code:java}
> if (!MessageDigest.isEqual(originalSignature.getBytes(), 
> currentSignature.getBytes())) {
>   throw new IllegalArgumentException("Invalid sign, original = " + 
> originalSignature +
> " current = " + currentSignature);
> }
> {code}
> CookieSigner is only used in ThriftHttpServlet#getClientNameFromCookie, which 
> doesn't handle the IllegalArgumentException; it only checks whether the 
> value from the cookie is null or not.
> https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java#L295
> {code:java}
>   currValue = signer.verifyAndExtract(currValue);
>   // Retrieve the user name, do the final validation step.
>   if (currValue != null) {
> {code}
> This should be fixed to either:
> a) Have CookieSigner not throw an IllegalArgumentException
> b) Improve ThriftHttpServlet to handle CookieSigner throwing an 
> IllegalArgumentException



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062141#comment-17062141
 ] 

Hive QA commented on HIVE-23032:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
13s{color} | {color:blue} standalone-metastore/metastore-server in master has 
186 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 13 new + 570 unchanged - 15 fixed = 583 total (was 585) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21166/dev-support/hive-personality.sh
 |
| git revision | master / 2c5a109 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21166/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21166/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests showed 
> a significant improvement after turning batching on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23047) Calculate the epoch on DB side

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062114#comment-17062114
 ] 

Hive QA commented on HIVE-23047:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997031/HIVE-23047.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18118 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21165/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21165/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21165/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997031 - PreCommit-HIVE-Build

> Calculate the epoch on DB side
> --
>
> Key: HIVE-23047
> URL: https://issues.apache.org/jira/browse/HIVE-23047
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23047.patch
>
>
> We use TxnHandler.getDbTime to calculate the epoch on the DB server, and 
> immediately insert the value back again. We would be better off using SQL 
> to calculate the value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23051) Clean up BucketCodec

2020-03-18 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062093#comment-17062093
 ] 

David Mollitor commented on HIVE-23051:
---

[~abstractdog] You got me looking at this based on our work on [TEZ-4130]

> Clean up BucketCodec
> 
>
> Key: HIVE-23051
> URL: https://issues.apache.org/jira/browse/HIVE-23051
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-23051.1.patch
>
>
> A couple of nagging things caught my eye with this class.  The first thing:
> {code:java|title=BucketCodec.java}
>   int statementId = options.getStatementId() >= 0 ? 
> options.getStatementId() : 0;
>   assert this.version >=0 && this.version <= MAX_VERSION
> : "Version out of range: " + version;
>   if(!(options.getBucketId() >= 0 && options.getBucketId() <= 
> MAX_BUCKET_ID)) {
> throw new IllegalArgumentException("bucketId out of range: " + 
> options.getBucketId());
>   }
>   if(!(statementId >= 0 && statementId <= MAX_STATEMENT_ID)) {
> throw new IllegalArgumentException("statementId out of range: " + 
> statementId);
>   }
> {code}
> {{statementId}} gets capped: if it is less than 0, it is rounded up to 0.  
> However, it later checks that {{statementId}} is non-negative, which will 
> always be true since it has already been rounded.  
> # Remove the rounding behavior.
> # Provide a better error message.
> # Fail fast in the constructor if the version is invalid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
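The fail-fast validation proposed in the description above can be sketched as follows (a minimal, hypothetical sketch: the constant values, class and method names are assumptions for illustration, not Hive's actual BucketCodec code):

```java
// Hypothetical sketch of the fail-fast validation proposed in HIVE-23051.
// Constants and names are assumptions, not Hive's actual implementation.
public class BucketCodecSketch {
  static final int MAX_VERSION = 1;
  static final int MAX_BUCKET_ID = 4095;
  static final int MAX_STATEMENT_ID = 4095;

  final int version;

  BucketCodecSketch(int version) {
    // Fail fast: reject an invalid version at construction time
    // instead of relying on a disabled-by-default assert later.
    if (version < 0 || version > MAX_VERSION) {
      throw new IllegalArgumentException("Version out of range [0, "
          + MAX_VERSION + "]: " + version);
    }
    this.version = version;
  }

  static void checkRange(String name, int value, int max) {
    // No silent rounding: a negative value is an error, reported with
    // a message that names the offending field and its valid range.
    if (value < 0 || value > max) {
      throw new IllegalArgumentException(
          name + " out of range [0, " + max + "]: " + value);
    }
  }
}
```

With this shape, a caller passing a negative statementId gets an immediate, descriptive exception instead of having the value quietly rounded up to 0.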


[jira] [Updated] (HIVE-23051) Clean up BucketCodec

2020-03-18 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23051:
--
Status: Patch Available  (was: Open)

> Clean up BucketCodec
> 
>
> Key: HIVE-23051
> URL: https://issues.apache.org/jira/browse/HIVE-23051
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-23051.1.patch
>
>
> A couple of nagging things caught my eye with this class.  The first thing:
> {code:java|title=BucketCodec.java}
>   int statementId = options.getStatementId() >= 0 ? 
> options.getStatementId() : 0;
>   assert this.version >=0 && this.version <= MAX_VERSION
> : "Version out of range: " + version;
>   if(!(options.getBucketId() >= 0 && options.getBucketId() <= 
> MAX_BUCKET_ID)) {
> throw new IllegalArgumentException("bucketId out of range: " + 
> options.getBucketId());
>   }
>   if(!(statementId >= 0 && statementId <= MAX_STATEMENT_ID)) {
> throw new IllegalArgumentException("statementId out of range: " + 
> statementId);
>   }
> {code}
> {{statementId}} gets capped: if it is less than 0, it is rounded up to 0.  
> However, it later checks that {{statementId}} is non-negative, which will 
> always be true since it has already been rounded.  
> # Remove the rounding behavior.
> # Provide a better error message.
> # Fail fast in the constructor if the version is invalid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23051) Clean up BucketCodec

2020-03-18 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23051:
--
Attachment: HIVE-23051.1.patch

> Clean up BucketCodec
> 
>
> Key: HIVE-23051
> URL: https://issues.apache.org/jira/browse/HIVE-23051
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-23051.1.patch
>
>
> A couple of nagging things caught my eye with this class.  The first thing:
> {code:java|title=BucketCodec.java}
>   int statementId = options.getStatementId() >= 0 ? 
> options.getStatementId() : 0;
>   assert this.version >=0 && this.version <= MAX_VERSION
> : "Version out of range: " + version;
>   if(!(options.getBucketId() >= 0 && options.getBucketId() <= 
> MAX_BUCKET_ID)) {
> throw new IllegalArgumentException("bucketId out of range: " + 
> options.getBucketId());
>   }
>   if(!(statementId >= 0 && statementId <= MAX_STATEMENT_ID)) {
> throw new IllegalArgumentException("statementId out of range: " + 
> statementId);
>   }
> {code}
> {{statementId}} gets capped: if it is less than 0, it is rounded up to 0.  
> However, it later checks that {{statementId}} is non-negative, which will 
> always be true since it has already been rounded.  
> # Remove the rounding behavior.
> # Provide a better error message.
> # Fail fast in the constructor if the version is invalid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23051) Clean up BucketCodec

2020-03-18 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor reassigned HIVE-23051:
-


> Clean up BucketCodec
> 
>
> Key: HIVE-23051
> URL: https://issues.apache.org/jira/browse/HIVE-23051
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>
> A couple of nagging things caught my eye with this class.  The first thing:
> {code:java|title=BucketCodec.java}
>   int statementId = options.getStatementId() >= 0 ? 
> options.getStatementId() : 0;
>   assert this.version >=0 && this.version <= MAX_VERSION
> : "Version out of range: " + version;
>   if(!(options.getBucketId() >= 0 && options.getBucketId() <= 
> MAX_BUCKET_ID)) {
> throw new IllegalArgumentException("bucketId out of range: " + 
> options.getBucketId());
>   }
>   if(!(statementId >= 0 && statementId <= MAX_STATEMENT_ID)) {
> throw new IllegalArgumentException("statementId out of range: " + 
> statementId);
>   }
> {code}
> {{statementId}} gets capped: if it is less than 0, it is rounded up to 0.  
> However, it later checks that {{statementId}} is non-negative, which will 
> always be true since it has already been rounded.  
> # Remove the rounding behavior.
> # Provide a better error message.
> # Fail fast in the constructor if the version is invalid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23047) Calculate the epoch on DB side

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062087#comment-17062087
 ] 

Hive QA commented on HIVE-23047:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
12s{color} | {color:blue} standalone-metastore/metastore-server in master has 
186 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 31 new + 573 unchanged - 11 fixed = 604 total (was 584) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21165/dev-support/hive-personality.sh
 |
| git revision | master / 2c5a109 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21165/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21165/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Calculate the epoch on DB side
> --
>
> Key: HIVE-23047
> URL: https://issues.apache.org/jira/browse/HIVE-23047
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23047.patch
>
>
> We use TxnHandler.getDbTime to calculate the epoch on the DB server, and 
> immediately insert the value back again. We would be better off using SQL 
> to calculate the value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062063#comment-17062063
 ] 

Hive QA commented on HIVE-22997:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997052/HIVE-22997.14.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18119 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestScheduledReplicationScenarios.testExternalTablesReplLoadBootstrapIncr
 (batchId=270)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21164/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21164/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21164/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997052 - PreCommit-HIVE-Build

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.14.patch, 
> HIVE-22997.2.patch, HIVE-22997.4.patch, HIVE-22997.5.patch, 
> HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21164) ACID: explore how we can avoid a move step during inserts/compaction

2020-03-18 Thread Sungwoo (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062047#comment-17062047
 ] 

Sungwoo commented on HIVE-21164:


[~kuczoram] Do you know if Utilities.handleDirectInsertTableFinalPath() with 
the same arguments may be called more than once from 
FileSinkOperator.jobCloseOp() when running a query? More specifically, assuming 
that we use S3 instead of HDFS, I wonder if the following scenario is feasible, 
or if Utilities.handleDirectInsertTableFinalPath() with the same argument is 
never called more than once.

1. Utilities.handleDirectInsertTableFinalPath() is called
  - manifests[] is computed okay
  - directInsertDirectories[] is computed okay
  - committed[] is computed okay from manifests[]
  - manifest directory is deleted
  - directInsertDirectories[] is inspected against committed[] in 
cleanDirectInsertDirectory(), and no output file is deleted.

2. Utilities.handleDirectInsertTableFinalPath() is called again
  - manifest directory has been deleted, so manifests[] remains empty.
  - directInsertDirectories[] is computed okay
  - committed[] remains empty.
  - directInsertDirectories[] is inspected against committed[] in 
cleanDirectInsertDirectory(), and every output file is deleted because 
committed[] is empty.

This patch works okay when tested with HDFS, but it shows the above behavior 
when tested with S3. (However, this result does not necessarily indicate a bug 
in this patch because I did not use Tez as the execution engine.)

> ACID: explore how we can avoid a move step during inserts/compaction
> 
>
> Key: HIVE-21164
> URL: https://issues.apache.org/jira/browse/HIVE-21164
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21164.1.patch, HIVE-21164.10.patch, 
> HIVE-21164.11.patch, HIVE-21164.11.patch, HIVE-21164.12.patch, 
> HIVE-21164.13.patch, HIVE-21164.14.patch, HIVE-21164.14.patch, 
> HIVE-21164.15.patch, HIVE-21164.16.patch, HIVE-21164.17.patch, 
> HIVE-21164.18.patch, HIVE-21164.19.patch, HIVE-21164.2.patch, 
> HIVE-21164.20.patch, HIVE-21164.21.patch, HIVE-21164.22.patch, 
> HIVE-21164.3.patch, HIVE-21164.4.patch, HIVE-21164.5.patch, 
> HIVE-21164.6.patch, HIVE-21164.7.patch, HIVE-21164.8.patch, HIVE-21164.9.patch
>
>
> Currently, we write compacted data to a temporary location and then move the 
> files to a final location, which is an expensive operation on some cloud file 
> systems. Since HIVE-20823 is already in, it can control the visibility of 
> compacted data for the readers. Therefore, we can perhaps avoid writing data 
> to a temporary location and directly write compacted data to the intended 
> final path.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Status: Patch Available  (was: In Progress)

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> Running with Oracle19, the parameters passed to a query in DataNucleus are 
> not boxed, so they must be explicitly set to Long and not long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
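The primitive-vs-wrapper distinction behind this fix can be illustrated with a tiny sketch (hypothetical helper, only showing the Java-side distinction, not Hive's actual DataNucleus query code):

```java
// A long primitive autoboxes implicitly in many contexts, but a framework
// that matches query parameters against declared Object types can behave
// differently. Boxing explicitly makes the parameter type unambiguous,
// mirroring the fix described in HIVE-23049.
public class BoxingSketch {
  static Object asQueryParam(long id) {
    // Explicit Long.valueOf instead of relying on autoboxing.
    return Long.valueOf(id);
  }
}
```

Usage: `Object p = BoxingSketch.asQueryParam(42L);` yields a `Long` instance that can be handed to the query layer without depending on implicit conversion.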


[jira] [Updated] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Attachment: HIVE-23049.02.patch

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> Running with Oracle19, the parameters passed to a query in DataNucleus are 
> not boxed, so they must be explicitly set to Long and not long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Attachment: (was: HIVE-23049.02.patch)

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> Running with Oracle19, the parameters passed to a query in DataNucleus are 
> not boxed, so they must be explicitly set to Long and not long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062037#comment-17062037
 ] 

Hive QA commented on HIVE-22997:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
38s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 1 new + 79 unchanged - 2 fixed 
= 80 total (was 81) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 649 
unchanged - 1 fixed = 649 total (was 650) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
58s{color} | {color:red} ql generated 1 new + 1530 unchanged - 1 fixed = 1531 
total (was 1531) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  org.apache.hadoop.hive.ql.exec.repl.ReplDumpWork is Serializable; 
consider declaring a serialVersionUID  At ReplDumpWork.java:a serialVersionUID  
At ReplDumpWork.java:[lines 41-140] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21164/dev-support/hive-personality.sh
 |
| git revision | master / 2c5a109 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21164/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21164/yetus/new-findbugs-ql.html
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21164/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.

[jira] [Commented] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062029#comment-17062029
 ] 

Jesus Camacho Rodriguez commented on HIVE-23049:


[~mgergely], can you add a comment above that line explaining the reason to box 
the value? That way we will be able to find this more easily the next time we 
hit the problem.

Other than that, LGTM +1

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> Running with Oracle19, the parameters passed to a query in DataNucleus are 
> not boxed, so they must be explicitly set to Long and not long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23050) Partition pruning cache miss during compilation

2020-03-18 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-23050:
--


> Partition pruning cache miss during compilation
> ---
>
> Key: HIVE-23050
> URL: https://issues.apache.org/jira/browse/HIVE-23050
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> {code:sql}
> create table pcr_t1 (key int, value string) partitioned by (ds string);
> insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
> where key < 20 order by key;
> explain extended select key, value, ds from pcr_t1 where (ds='2000-04-08' and 
> key=1) or (ds='2000-04-09' and key=2) order by key, value, ds
> {code}
> During query compilation, HivePartitionPruner fetches the list of partitions 
> and caches it. Later, PCR (partition condition removal) tries to get the 
> pruned partitions, but due to a cache miss the request goes to the metastore 
> server to retrieve them using listPartitions.
> An improvement would be to use the already cached list of partitions to do 
> the pruning for PCR, or for pruning in general.
> (I am not sure why HivePartitionPruner isn't able to do the partition pruning 
> in the first place.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Description: Running with Oracle19 the parameters present to a query in 
datanucleus are not boxed, thus it must be explicitly set to Long and not long. 
 (was: Running with Oracle19 the parameters present to a query in datanucleus 
are not autoboxed, thus it must be explicitly set to Long and not long.)

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> Running with Oracle19, the parameters passed to a query in DataNucleus are 
> not boxed, so they must be explicitly set to Long and not long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23042) Merge queries to a single one for updating MIN_OPEN_TXNS table

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17062023#comment-17062023
 ] 

Hive QA commented on HIVE-23042:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997014/HIVE-23042.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18118 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21163/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21163/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21163/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997014 - PreCommit-HIVE-Build

> Merge queries to a single one for updating MIN_OPEN_TXNS table
> --
>
> Key: HIVE-23042
> URL: https://issues.apache.org/jira/browse/HIVE-23042
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23042.patch
>
>
> When opening a new transaction we issue 2 queries to update the MIN_OPEN_TXN 
> table.
> {code}
> 
>  values(763, 763)>
> {code}
> This could be achieved faster with a single query, if we do not open 
> transactions in batch, like:
> {code}
>SELECT ?, MIN("TXN_ID") FROM "TXNS" WHERE "TXN_STATE" = 'o'>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Attachment: HIVE-23049.02.patch

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch, HIVE-23049.02.patch
>
>
> Running with Oracle19, the parameters passed to a query in DataNucleus are 
> not autoboxed, so they must be explicitly set to Long and not long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23045:
---
Status: Open  (was: Patch Available)

> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch, 
> HIVE-23045.3.patch
>
>
> The Zookeeper 3.5.5 server can communicate with its clients over secure 
> SSL/TLS connections.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with Zookeeper servers. Zookeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by separate configuration parameters corresponding to the different 
> use cases.
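For the client side, a sketch of the system properties the ZooKeeper 3.5.x client reads for TLS, per the SSL User Guide linked above (the truststore path and password are placeholders; a keystore is additionally needed for mutual authentication):

```java
public class ZkClientTls {
    // Properties recognized by the ZooKeeper 3.5.x client for TLS.
    // The Netty connection socket is required; the default NIO socket
    // does not support TLS.
    static void enableClientTls(String trustStorePath, String trustStorePassword) {
        System.setProperty("zookeeper.client.secure", "true");
        System.setProperty("zookeeper.clientCnxnSocket",
            "org.apache.zookeeper.ClientCnxnSocketNetty");
        System.setProperty("zookeeper.ssl.trustStore.location", trustStorePath);
        System.setProperty("zookeeper.ssl.trustStore.password", trustStorePassword);
    }
}
```

Since these are JVM-wide system properties, surfacing them through the per-component Hive configuration keys proposed in this ticket avoids forcing one setting on every Zookeeper client in the process.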



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23045:
---
Status: Patch Available  (was: Open)

> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch, 
> HIVE-23045.3.patch
>
>
> The Zookeeper 3.5.5 server can communicate with its clients over secure 
> SSL/TLS connections.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with Zookeeper servers. Zookeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by separate configuration parameters corresponding to the different 
> use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23045:
---
Attachment: HIVE-23045.3.patch

> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch, 
> HIVE-23045.3.patch
>
>
> The Zookeeper 3.5.5 server can communicate with its clients over secure 
> SSL/TLS connections.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with Zookeeper servers. Zookeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by separate configuration parameters corresponding to the different 
> use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Description: When running against Oracle 19, the parameters passed to a query 
in DataNucleus are not autoboxed, so they must be set explicitly as Long rather 
than long.  (was: For some reason the current unique constraint name check 
query is not working with Oracle19. We should use another API call.)

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch
>
>
> When running against Oracle 19, the parameters passed to a query in 
> DataNucleus are not autoboxed, so they must be set explicitly as Long rather 
> than long.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23049) Constraint name uniqueness query should set Long as parameter instead of long

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Summary: Constraint name uniqueness query should set Long as parameter 
instead of long  (was: Modify constraint name uniqueness query to run with 
Oracle19 as well)

> Constraint name uniqueness query should set Long as parameter instead of long
> -
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch
>
>
> For some reason the current unique constraint name check query is not working 
> with Oracle 19. We should use another API call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23045:
---
Status: In Progress  (was: Patch Available)

> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch
>
>
> The Zookeeper 3.5.5 server can communicate with its clients over secure 
> SSL/TLS connections.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with Zookeeper servers. Zookeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by separate configuration parameters corresponding to the different 
> use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23045:
---
Attachment: HIVE-23045.2.patch

> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch
>
>
> The Zookeeper 3.5.5 server can communicate with its clients over secure 
> SSL/TLS connections.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with Zookeeper servers. Zookeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by separate configuration parameters corresponding to the different 
> use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23045:
---
Status: Patch Available  (was: In Progress)

> Zookeeper SSL/TLS support
> -
>
> Key: HIVE-23045
> URL: https://issues.apache.org/jira/browse/HIVE-23045
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, JDBC, Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Critical
> Attachments: HIVE-23045.1.patch, HIVE-23045.2.patch
>
>
> The Zookeeper 3.5.5 server can communicate with its clients over secure 
> SSL/TLS connections.
> [https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide]
> SSL communication should be possible in the different parts of Hive that 
> communicate with Zookeeper servers. Zookeeper clients are used in the 
> following places:
>  * HiveServer2 PrivilegeSynchronizer
>  * HiveServer2 register/remove server from Zookeeper
>  * HS2ActivePassiveHARegistryClient
>  * ZooKeeperHiveLockManager
>  * LLapZookeeperRegistryImpl
>  * TezAmRegistryImpl
>  * WebHCat ZooKeeperStorage
>  * JDBC Driver server lookup
>  * Metastore - ZookeeperTokenStore
>  * Metastore register/remove server from Zookeeper
> The flag to enable SSL communication and the required parameters should be 
> provided by separate configuration parameters corresponding to the different 
> use cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23042) Merge queries to a single one for updating MIN_OPEN_TXNS table

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061998#comment-17061998
 ] 

Hive QA commented on HIVE-23042:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
11s{color} | {color:blue} standalone-metastore/metastore-server in master has 
186 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} 
standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The 
patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
20s{color} | {color:red} standalone-metastore/metastore-server generated 2 new 
+ 186 unchanged - 0 fixed = 188 total (was 186) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} standalone-metastore/metastore-tools/tools-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(Connection, 
Statement, OpenTxnRequest) may fail to clean up java.sql.ResultSet  Obligation 
to clean up resource created at TxnHandler.java:to clean up java.sql.ResultSet  
Obligation to clean up resource created at TxnHandler.java:[line 599] is not 
discharged |
|  |  A prepared statement is generated from a nonconstant String in 
org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(Connection, Statement, 
OpenTxnRequest)   At TxnHandler.java:from a nonconstant String in 
org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(Connection, Statement, 
OpenTxnRequest)   At TxnHandler.java:[line 637] |
| FindBugs | module:standalone-metastore/metastore-tools/tools-common |
|  |  Return value of 
org.apache.hadoop.hive.metastore.api.OpenTxnsResponse.getTxn_ids() ignored, but 
method has no side effect  At HMSClient.java:but method has no side effect  At 
HMSClient.java:[line 331] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21163/dev-support/hive-personality.sh
 |
| git revision | master / 2c5a109 |
| 

[jira] [Commented] (HIVE-23033) MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061983#comment-17061983
 ] 

Hive QA commented on HIVE-23033:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997018/HIVE-23033.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18118 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21162/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21162/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21162/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997018 - PreCommit-HIVE-Build

> MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE
> ---
>
> Key: HIVE-23033
> URL: https://issues.apache.org/jira/browse/HIVE-23033
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0, 3.1.1, 3.1.2
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-23033.1.patch, HIVE-23033.2.branch-3.patch, 
> HIVE-23033.2.patch
>
>
> * The initial value for this table in the schema scripts was removed in 
> HIVE-17566: 
> https://github.com/apache/hive/commit/32b7abac961ca3879d23b074357f211fc7c49131#diff-3d1a4bae0d5d53c8e4ea79951ebf5eceL598
> * This was fixed in a number of scripts in HIVE-18781, but not for mssql: 
> https://github.com/apache/hive/commit/59483bca262880d3e7ef1b873d3c21176e9294cb#diff-4f43efd5a45cc362cb138287d90dbf82
> * It has remained this way since then
> When using the schematool, the table gets initialized by other means.
> This could be backported to all active branches for 3.x as well.
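A guarded-seed sketch of what the init script needs to restore, expressed as JDBC for illustration (the column names follow the metastore schema as used by the other databases' scripts; verify them against the actual MSSQL script before relying on this):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NotificationSequenceSeed {
    // NOTIFICATION_SEQUENCE is a single-row table holding the next event id;
    // column names here are assumed from the non-MSSQL scripts.
    static final String COUNT_SQL = "SELECT COUNT(*) FROM NOTIFICATION_SEQUENCE";
    static final String SEED_SQL =
        "INSERT INTO NOTIFICATION_SEQUENCE (NNI_ID, NEXT_EVENT_ID) VALUES (1, 1)";

    // Seed the table only if it is empty, mirroring the guarded inserts the
    // HIVE-18781 fix added to the other databases' schema scripts.
    static void seedIfEmpty(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(COUNT_SQL)) {
            rs.next();
            if (rs.getInt(1) == 0) {
                st.executeUpdate(SEED_SQL);
            }
        }
    }
}
```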



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: HIVE-22997.14.patch

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.14.patch, 
> HIVE-22997.2.patch, HIVE-22997.4.patch, HIVE-22997.5.patch, 
> HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: (was: HIVE-22997.14.patch)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.2.patch, 
> HIVE-22997.4.patch, HIVE-22997.5.patch, HIVE-22997.6.patch, 
> HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22955) PreUpgradeTool can fail because access to CharsetDecoder is not synchronized

2020-03-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hankó Gergely updated HIVE-22955:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> PreUpgradeTool can fail because access to CharsetDecoder is not synchronized
> 
>
> Key: HIVE-22955
> URL: https://issues.apache.org/jira/browse/HIVE-22955
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22955.1.patch, HIVE-22955.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2020-02-26 20:22:49,683 ERROR [main] acid.PreUpgradeTool 
> (PreUpgradeTool.java:main(150)) - PreUpgradeTool failed 
> org.apache.hadoop.hive.ql.metadata.HiveException at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.prepareAcidUpgradeInternal(PreUpgradeTool.java:283)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.main(PreUpgradeTool.java:146)
>  Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
> ...
> Caused by: java.lang.IllegalStateException: Current state = RESET, new state 
> = FLUSHED at 
> java.nio.charset.CharsetDecoder.throwIllegalStateException(CharsetDecoder.java:992)
>  at java.nio.charset.CharsetDecoder.flush(CharsetDecoder.java:675) at 
> java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:804) at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:606)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:567)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.getCompactionCommands(PreUpgradeTool.java:464)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.processTable(PreUpgradeTool.java:374)
> {code}
> This is probably caused by HIVE-21948.
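One common fix for this class of bug (a sketch of the general technique, not necessarily the patch applied here): CharsetDecoder carries internal state, which is the RESET/FLUSHED state machine visible in the stack trace above, so a shared instance breaks under concurrency. Giving each worker thread its own decoder via a ThreadLocal removes the interleaving.

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

public class ThreadSafeDecode {
    // One decoder per thread: the decoder's internal state machine is then
    // never touched by two threads at once.
    private static final ThreadLocal<CharsetDecoder> UTF8_DECODER =
        ThreadLocal.withInitial(() -> StandardCharsets.UTF_8.newDecoder());

    static String decode(byte[] bytes) {
        try {
            // The convenience decode(ByteBuffer) runs a full reset/decode/flush
            // cycle on the per-thread decoder.
            return UTF8_DECODER.get().decode(ByteBuffer.wrap(bytes)).toString();
        } catch (CharacterCodingException e) {
            throw new IllegalStateException("malformed input", e);
        }
    }
}
```

Synchronizing on a single shared decoder would also be correct but serializes all decoding; the per-thread decoder keeps the parallelism the tool's worker pool was built for.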



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22955) PreUpgradeTool can fail because access to CharsetDecoder is not synchronized

2020-03-18 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061963#comment-17061963
 ] 

Hankó Gergely commented on HIVE-22955:
--

It was merged with 
[https://github.com/apache/hive/commit/2c5a1095b1f39cbfdfbcf472f21b11239ad1b49e]

> PreUpgradeTool can fail because access to CharsetDecoder is not synchronized
> 
>
> Key: HIVE-22955
> URL: https://issues.apache.org/jira/browse/HIVE-22955
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22955.1.patch, HIVE-22955.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2020-02-26 20:22:49,683 ERROR [main] acid.PreUpgradeTool 
> (PreUpgradeTool.java:main(150)) - PreUpgradeTool failed 
> org.apache.hadoop.hive.ql.metadata.HiveException at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.prepareAcidUpgradeInternal(PreUpgradeTool.java:283)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.main(PreUpgradeTool.java:146)
>  Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
> ...
> Caused by: java.lang.IllegalStateException: Current state = RESET, new state 
> = FLUSHED at 
> java.nio.charset.CharsetDecoder.throwIllegalStateException(CharsetDecoder.java:992)
>  at java.nio.charset.CharsetDecoder.flush(CharsetDecoder.java:675) at 
> java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:804) at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:606)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:567)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.getCompactionCommands(PreUpgradeTool.java:464)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.processTable(PreUpgradeTool.java:374)
> {code}
> This is probably caused by HIVE-21948.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: HIVE-22997.14.patch

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.14.patch, 
> HIVE-22997.2.patch, HIVE-22997.4.patch, HIVE-22997.5.patch, 
> HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23033) MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061926#comment-17061926
 ] 

Hive QA commented on HIVE-23033:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21162/dev-support/hive-personality.sh
 |
| git revision | master / 2c5a109 |
| Default Java | 1.8.0_111 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21162/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> MSSQL metastore schema init script doesn't initialize NOTIFICATION_SEQUENCE
> ---
>
> Key: HIVE-23033
> URL: https://issues.apache.org/jira/browse/HIVE-23033
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 3.1.0, 3.1.1, 3.1.2
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-23033.1.patch, HIVE-23033.2.branch-3.patch, 
> HIVE-23033.2.patch
>
>
> * The initial value for this table in the schema scripts was removed in 
> HIVE-17566: 
> https://github.com/apache/hive/commit/32b7abac961ca3879d23b074357f211fc7c49131#diff-3d1a4bae0d5d53c8e4ea79951ebf5eceL598
> * This was fixed in a number of scripts in HIVE-18781, but not for mssql: 
> https://github.com/apache/hive/commit/59483bca262880d3e7ef1b873d3c21176e9294cb#diff-4f43efd5a45cc362cb138287d90dbf82
> * It has remained this way since then
> When using the schematool, the table gets initialized by other means.
> This could be backported to all active branches for 3.x as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061910#comment-17061910
 ] 

Hive QA commented on HIVE-23045:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997010/HIVE-23045.1.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 95 failed/errored test(s), 18122 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterPartition 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTable 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterTableCascade
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testAlterViewParititon
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testColumnStatistics 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTable 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testComplexTypeApi 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testConcurrentMetastores
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateAndGetTableWithDriver
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testCreateTableSettingId
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBLocationChange 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwner 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDBOwnerChange 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabase 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocation 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDatabaseLocationWithPermissionProblems
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropDatabaseCascadeMVMultiDB
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testDropTable 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterLastPartition
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFilterSinglePartition
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testFunctionWithResources
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetConfigValue 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetMetastoreUuid 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetPartitionsWithSpec
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetSchemaWithNoClassDefFoundError
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetTableObjects 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testGetUUIDInParallel
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testJDOPersistanceManagerCleanup
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionNames
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitions 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testListPartitionsWihtLimitEnabled
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testNameMethods 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartition 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartitionFilter 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testPartitionFilterLike
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRenamePartition 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testRetriableClientWithConnLifetime
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleFunction 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTable 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSimpleTypeApi 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testStatsFastTrivial 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testSynchronized 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableDatabase 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testTableFilter 
(batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testUpdatePartitionStat_doesNotUpdateStats
 (batchId=234)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStoreZK.testValidateTableCols
 (batchId=234)

[jira] [Updated] (HIVE-23044) Make sure Cleaner doesn't delete delta directories for running queries

2020-03-18 Thread Zoltan Chovan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Chovan updated HIVE-23044:
-
Attachment: HIVE-23044.2.branch-3.1.patch

> Make sure Cleaner doesn't delete delta directories for running queries
> --
>
> Key: HIVE-23044
> URL: https://issues.apache.org/jira/browse/HIVE-23044
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Major
> Attachments: HIVE-23044.1.branch-3.1.patch, 
> HIVE-23044.2.branch-3.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (HIVE-23044) Make sure Cleaner doesn't delete delta directories for running queries

2020-03-18 Thread Zoltan Chovan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Chovan updated HIVE-23044:
-
Comment: was deleted

(was: uploading empty patchfile to establish baseline)

> Make sure Cleaner doesn't delete delta directories for running queries
> --
>
> Key: HIVE-23044
> URL: https://issues.apache.org/jira/browse/HIVE-23044
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Major
> Attachments: HIVE-23044.1.branch-3.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=405578&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405578
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 16:47
Start Date: 18/Mar/20 16:47
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #951: HIVE-22997 : Copy 
external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r394492450
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
 ##
 @@ -193,20 +193,29 @@ private Long getEventFromPreviousDumpMetadata(Path 
previousDumpPath) throws Sema
   }
 
   private Path getPreviousDumpMetadataPath(Path dumpRoot) throws IOException {
+FileStatus latestUpdatedStatus = null;
 FileSystem fs = dumpRoot.getFileSystem(conf);
 if (fs.exists(dumpRoot)) {
   FileStatus[] statuses = fs.listStatus(dumpRoot);
   if (statuses.length > 0)  {
 
 Review comment:
   It will return an empty array.
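For reference, the FileSystem contract in Hadoop is that listStatus() throws FileNotFoundException for a missing path and returns an empty array, never null, for an existing empty directory; the surrounding code already guards with fs.exists(). A hypothetical, self-contained sketch of the latest-by-modification-time selection under review (Status is a stand-in for Hadoop's FileStatus, not the actual ReplDumpTask code) shows that an empty listing needs no separate length check, since the loop simply never runs:

```java
import java.util.List;

public class LatestStatusSketch {
    // Stand-in for org.apache.hadoop.fs.FileStatus (illustrative only).
    static final class Status {
        final String path;
        final long modificationTime;
        Status(String path, long modificationTime) {
            this.path = path;
            this.modificationTime = modificationTime;
        }
    }

    // Returns the latest-modified status, or null when the listing is empty;
    // an empty list just skips the loop, so no explicit length check is needed.
    static Status latest(List<Status> statuses) {
        Status latestUpdated = null;
        for (Status s : statuses) {
            if (latestUpdated == null || s.modificationTime > latestUpdated.modificationTime) {
                latestUpdated = s;
            }
        }
        return latestUpdated;
    }
}
```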
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405578)
Time Spent: 6h  (was: 5h 50m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.2.patch, 
> HIVE-22997.4.patch, HIVE-22997.5.patch, HIVE-22997.6.patch, 
> HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23045) Zookeeper SSL/TLS support

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061889#comment-17061889
 ] 

Hive QA commented on HIVE-23045:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
35s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} llap-client in master has 27 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
12s{color} | {color:blue} standalone-metastore/metastore-server in master has 
186 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
46s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} service in master has 50 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} jdbc in master has 16 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} hcatalog/webhcat/svr in master has 96 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 50 new + 86 unchanged - 0 fixed = 136 total (was 86) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} common: The patch generated 8 new + 374 unchanged - 0 
fixed = 382 total (was 374) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} llap-client: The patch generated 5 new + 43 unchanged 
- 0 fixed = 48 total (was 43) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 10 new + 36 unchanged - 1 fixed = 46 total (was 37) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 4 new + 7 unchanged - 3 fixed 
= 11 total (was 10) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} service: The patch generated 0 new + 37 unchanged - 
1 fixed = 37 total (was 38) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} jdbc: The patch generated 1 new + 23 unchanged - 0 
fixed = 24 total (was 23) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hcatalog/webhcat/svr: The patch generated 5 new + 19 
unchanged - 2 fixed = 24 total (was 21) {color} |

[jira] [Commented] (HIVE-23044) Make sure Cleaner doesn't delete delta directories for running queries

2020-03-18 Thread Zoltan Chovan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061887#comment-17061887
 ] 

Zoltan Chovan commented on HIVE-23044:
--

uploading empty patchfile to establish baseline

> Make sure Cleaner doesn't delete delta directories for running queries
> --
>
> Key: HIVE-23044
> URL: https://issues.apache.org/jira/browse/HIVE-23044
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Major
> Attachments: HIVE-23044.1.branch-3.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23049) Modify constraint name uniqueness query to run with Oracle19 as well

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23049:
--
Attachment: HIVE-23049.01.patch

> Modify constraint name uniqueness query to run with Oracle19 as well
> 
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch
>
>
> For some reason the current unique constraint name check query is not working 
> with Oracle19. We should use another API call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-23049) Modify constraint name uniqueness query to run with Oracle19 as well

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-23049 started by Miklos Gergely.
-
> Modify constraint name uniqueness query to run with Oracle19 as well
> 
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23049.01.patch
>
>
> For some reason the current unique constraint name check query is not working 
> with Oracle19. We should use another API call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23034) Arrow serializer should not keep the reference of arrow offset and validity buffers

2020-03-18 Thread Thejas Nair (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061867#comment-17061867
 ] 

Thejas Nair commented on HIVE-23034:


+1

 

> Arrow serializer should not keep the reference of arrow offset and validity 
> buffers
> ---
>
> Key: HIVE-23034
> URL: https://issues.apache.org/jira/browse/HIVE-23034
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Serializers/Deserializers
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23034.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, a part of writeList() method in arrow serializer is implemented 
> like - 
> {code:java}
> final ArrowBuf offsetBuffer = arrowVector.getOffsetBuffer();
> int nextOffset = 0;
> for (int rowIndex = 0; rowIndex < size; rowIndex++) {
>   int selectedIndex = rowIndex;
>   if (vectorizedRowBatch.selectedInUse) {
> selectedIndex = vectorizedRowBatch.selected[rowIndex];
>   }
>   if (hiveVector.isNull[selectedIndex]) {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
>   } else {
> offsetBuffer.setInt(rowIndex * OFFSET_WIDTH, nextOffset);
> nextOffset += (int) hiveVector.lengths[selectedIndex];
> arrowVector.setNotNull(rowIndex);
>   }
> }
> offsetBuffer.setInt(size * OFFSET_WIDTH, nextOffset);
> {code}
> 1) Here we obtain a reference to {{final ArrowBuf offsetBuffer = 
> arrowVector.getOffsetBuffer();}} and keep updating the arrow vector and 
> offset vector. 
> Problem - 
> {{arrowVector.setNotNull(rowIndex)}} keeps checking the index and reallocates 
> the offset and validity buffers when a threshold is crossed, updates the 
> references internally and also releases the old buffers (which decrements the 
> buffer reference count). Now the reference obtained in 1) becomes obsolete. 
> Furthermore, if we try to read or write the old buffer, we see - 
> {code:java}
> Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
>   at 
> io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1413)
>   at io.netty.buffer.ArrowBuf.checkIndexD(ArrowBuf.java:131)
>   at io.netty.buffer.ArrowBuf.chk(ArrowBuf.java:162)
>   at io.netty.buffer.ArrowBuf.setInt(ArrowBuf.java:656)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:432)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:352)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:288)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:419)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:285)
>   at 
> org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:205)
> {code}
>  
> Solution - 
> This can be fixed by re-fetching the buffers ({{arrowVector.getOffsetBuffer()}}) 
> each time we want to update them. 
> In our internal tests this is seen very frequently on Arrow 0.8.0 but not on 
> 0.10.0; it should be handled the same way for 0.10.0 too, as it reallocates the 
> buffers in the same way.
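The hazard described above is generic to any structure that swaps its backing buffer on growth. A minimal, self-contained Java sketch (a hypothetical GrowableIntBuffer standing in for the Arrow vector; note that Arrow additionally releases the old buffer, so stale access there fails with IllegalReferenceCountException rather than being silently lost):

```java
// Hypothetical stand-in for an Arrow vector whose backing buffer is
// reallocated once a capacity threshold is crossed.
final class GrowableIntBuffer {
    private int[] data = new int[4];

    // Returns the CURRENT backing array; it becomes stale after a realloc.
    int[] backingArray() { return data; }

    // Like arrowVector.setNotNull(i): grows (reallocates) when needed,
    // copying the old contents into the new buffer.
    void ensureCapacity(int index) {
        if (index >= data.length) {
            data = java.util.Arrays.copyOf(data, Math.max(index + 1, data.length * 2));
        }
    }

    int get(int index) { return data[index]; }
}

public class StaleBufferDemo {
    public static void main(String[] args) {
        GrowableIntBuffer vec = new GrowableIntBuffer();

        // BUGGY pattern: cache the buffer reference once, keep writing to it.
        int[] cached = vec.backingArray();
        cached[0] = 42;
        vec.ensureCapacity(10);          // reallocation: 'cached' is now stale
        cached[1] = 99;                  // lost: writes go to the abandoned array
        System.out.println(vec.get(1));  // 0, not 99

        // FIXED pattern: re-fetch the buffer after any call that may grow it.
        vec.backingArray()[1] = 99;
        System.out.println(vec.get(1));  // 99
    }
}
```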



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23049) Modify constraint name uniqueness query to run with Oracle19 as well

2020-03-18 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-23049:
-


> Modify constraint name uniqueness query to run with Oracle19 as well
> 
>
> Key: HIVE-23049
> URL: https://issues.apache.org/jira/browse/HIVE-23049
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>
> For some reason the current unique constraint name check query is not working 
> with Oracle19. We should use another API call.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060441#comment-17060441
 ] 

Denys Kuzmenko edited comment on HIVE-23032 at 3/18/20, 4:06 PM:
-

[~pvary], results are very close for MySQL; for Oracle/Postgres, however, 
batching gives better performance. 
4 tables x 1800 partitions

PostgreSQL v9.3
{code}
batch insert
lockReq: {count=200, sum=720917, min=1790, average=3604.585000, max=6952}
total: {count=200, sum=430610, min=749, average=2153.05, max=5913}

batch insert + reWriteBatchedInserts 
lockReq: {count=200, sum=488272, min=1370, average=2441.36, max=3813}
total: {count=200, sum=283623, min=667, average=1418.115000, max=2780}

Multi-row insert
lockReq: {count=200, sum=771952, min=1853, average=3859.76, max=7817}
total: {count=200, sum=455352, min=768, average=2276.76, max=5880}
{code}

MySQL v5.1.46
{code}
batch insert + rewriteBatchedStatements
lockReq: {count=1000, sum=717697, min=408, average=717.697000, max=1814}
total: {count=1000, sum=2490528, min=1347, average=2490.528000, max=4341}

Multi-row insert
lockReq: {count=1000, sum=710006, min=393, average=710.01, max=2242}
total: {count=1000, sum=2229274, min=1223, average=2229.27, max=5600}
{code}


was (Author: dkuzmenko):
[~pvary], same for MySQL, however for Oracle/Postgres it gives better 
performance. 

*postgres:9.3* (4 tables x 1800 partitions)
{code}
batch insert
lockReq: {count=200, sum=720917, min=1790, average=3604.585000, max=6952}
total: {count=200, sum=430610, min=749, average=2153.05, max=5913}

batch insert + reWriteBatchedInserts 
lockReq: {count=200, sum=488272, min=1370, average=2441.36, max=3813}
total: {count=200, sum=283623, min=667, average=1418.115000, max=2780}

Multi-row insert
lockReq: {count=200, sum=771952, min=1853, average=3859.76, max=7817}
total: {count=200, sum=455352, min=768, average=2276.76, max=5880}
{code}

MySQL v5.1.46
{code}
batch insert + rewriteBatchedStatements
lockReq: {count=1000, sum=717697, min=408, average=717.697000, max=1814}
total: {count=1000, sum=2490528, min=1347, average=2490.528000, max=4341}

Multi-row insert
lockReq: {count=1000, sum=710006, min=393, average=710.01, max=2242}
total: {count=1000, sum=2229274, min=1223, average=2229.27, max=5600}
{code}

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace multi-row insert in Oracle with batching. Performance tests showed 
> significant performance improvement after turning batching on.
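The two approaches being benchmarked can be sketched as follows. This is illustrative only, not the actual TxnHandler code; the HIVE_LOCKS column names are assumptions for the example. A multi-row insert packs many value tuples into one statement, while JDBC batching reuses one row-shaped prepared statement and lets the driver (with MySQL's rewriteBatchedStatements=true or Postgres' reWriteBatchedInserts=true) collapse the batch on the wire:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class LockInsertSketch {
    // Multi-row insert: one statement with N value tuples. Works with
    // MySQL/Postgres VALUES syntax; Oracle lacks multi-row VALUES, which is
    // one reason to switch to JDBC batching there.
    static String buildMultiRowInsert(String table, int columns, int rows) {
        String tuple = "(" + "?,".repeat(columns - 1) + "?)";
        StringBuilder sql = new StringBuilder("INSERT INTO " + table + " VALUES ");
        for (int r = 0; r < rows; r++) {
            if (r > 0) sql.append(',');
            sql.append(tuple);
        }
        return sql.toString();
    }

    // JDBC batching: one row-shaped statement executed as a batch, giving a
    // single round trip per executeBatch() call. Column names are assumed.
    static void insertLocksBatched(Connection conn, List<long[]> locks) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO HIVE_LOCKS (HL_LOCK_EXT_ID, HL_TXNID) VALUES (?, ?)")) {
            for (long[] lock : locks) {
                ps.setLong(1, lock[0]);
                ps.setLong(2, lock[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}
```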



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=405541&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405541
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 16:06
Start Date: 18/Mar/20 16:06
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r394464144
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
 ##
 @@ -193,20 +193,29 @@ private Long getEventFromPreviousDumpMetadata(Path 
previousDumpPath) throws Sema
   }
 
   private Path getPreviousDumpMetadataPath(Path dumpRoot) throws IOException {
+FileStatus latestUpdatedStatus = null;
 FileSystem fs = dumpRoot.getFileSystem(conf);
 if (fs.exists(dumpRoot)) {
   FileStatus[] statuses = fs.listStatus(dumpRoot);
   if (statuses.length > 0)  {
 
 Review comment:
   Is fs.listStatus() guaranteed to be not null even when the path does not 
have any folder/files?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405541)
Time Spent: 5h 50m  (was: 5h 40m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.2.patch, 
> HIVE-22997.4.patch, HIVE-22997.5.patch, HIVE-22997.6.patch, 
> HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060441#comment-17060441
 ] 

Denys Kuzmenko edited comment on HIVE-23032 at 3/18/20, 4:04 PM:
-

[~pvary], same for MySQL, however for Oracle/Postgres it gives better 
performance. 

*postgres:9.3* (4 tables x 1800 partitions)
{code}
batch insert
lockReq: {count=200, sum=720917, min=1790, average=3604.585000, max=6952}
total: {count=200, sum=430610, min=749, average=2153.05, max=5913}

batch insert + reWriteBatchedInserts 
lockReq: {count=200, sum=488272, min=1370, average=2441.36, max=3813}
total: {count=200, sum=283623, min=667, average=1418.115000, max=2780}

Multi-row insert
lockReq: {count=200, sum=771952, min=1853, average=3859.76, max=7817}
total: {count=200, sum=455352, min=768, average=2276.76, max=5880}
{code}

MySQL v5.1.46
{code}
batch insert + rewriteBatchedStatements
lockReq: {count=1000, sum=717697, min=408, average=717.697000, max=1814}
total: {count=1000, sum=2490528, min=1347, average=2490.528000, max=4341}

Multi-row insert
lockReq: {count=1000, sum=710006, min=393, average=710.01, max=2242}
total: {count=1000, sum=2229274, min=1223, average=2229.27, max=5600}
{code}


was (Author: dkuzmenko):
[~pvary], same for MySQL, however for Oracle/Postgres it gives better 
performance. 

*postgres:9.3* (4 tables x 1800 partitions)
{code}
batch insert
lockReq: {count=200, sum=720917, min=1790, average=3604.585000, max=6952}
total: {count=200, sum=430610, min=749, average=2153.05, max=5913}

batch insert+ reWriteBatchedInserts 
lockReq: {count=200, sum=488272, min=1370, average=2441.36, max=3813}
total: {count=200, sum=283623, min=667, average=1418.115000, max=2780}

Multi-row insert
lockReq: {count=200, sum=771952, min=1853, average=3859.76, max=7817}
total: {count=200, sum=455352, min=768, average=2276.76, max=5880}
{code}

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace multi-row insert in Oracle with batching. Performance tests showed 
> significant performance improvement after turning batching on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=405536&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405536
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 15:59
Start Date: 18/Mar/20 15:59
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #951: HIVE-22997 : Copy 
external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r394459376
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
 ##
 @@ -193,20 +193,29 @@ private Long getEventFromPreviousDumpMetadata(Path 
previousDumpPath) throws Sema
   }
 
   private Path getPreviousDumpMetadataPath(Path dumpRoot) throws IOException {
+FileStatus latestUpdatedStatus = null;
 FileSystem fs = dumpRoot.getFileSystem(conf);
 if (fs.exists(dumpRoot)) {
   FileStatus[] statuses = fs.listStatus(dumpRoot);
   if (statuses.length > 0)  {
 
 Review comment:
   We don't need this check
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405536)
Time Spent: 5h 40m  (was: 5.5h)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.2.patch, 
> HIVE-22997.4.patch, HIVE-22997.5.patch, HIVE-22997.6.patch, 
> HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: HIVE-22997.13.patch

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.13.patch, HIVE-22997.2.patch, 
> HIVE-22997.4.patch, HIVE-22997.5.patch, HIVE-22997.6.patch, 
> HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23044) Make sure Cleaner doesn't delete delta directories for running queries

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061815#comment-17061815
 ] 

Hive QA commented on HIVE-23044:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997004/HIVE-23044.1.branch-3.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 141 failed/errored test(s), 14415 tests 
executed
*Failed tests:*
{noformat}
TestAddPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestAddPartitionsFromPartSpec - did not produce a TEST-*.xml file (likely timed 
out) (batchId=228)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestAggregateStatsCache - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestAlterPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestAppendPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=274)
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestCatalogNonDefaultClient - did not produce a TEST-*.xml file (likely timed 
out) (batchId=226)
TestCatalogNonDefaultSvr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestCatalogOldClient - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestCatalogs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestChainFilter - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestCheckConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestCloseableThreadLocal - did not produce a TEST-*.xml file (likely timed out) 
(batchId=333)
TestCustomQueryFilter - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=237)
TestDatabaseName - did not produce a TEST-*.xml file (likely timed out) 
(batchId=195)
TestDatabases - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDateColumnVector - did not produce a TEST-*.xml file (likely timed out) 
(batchId=195)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestDefaultConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDropPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=274)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=229)
TestExchangePartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestFMSketchSerialization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=238)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestForeignKey - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestFunctions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestGetPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestGetPartitionsUsingProjectionAndFilterSpecs - did not produce a TEST-*.xml 
file (likely timed out) (batchId=230)
TestGetTableMeta - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestGroupFilter - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestHLLNoBias - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHLLSerialization - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHdfsUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
TestHiveAlterHandler - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed 
out) (batchId=237)
TestHiveMetaStorePartitionSpecs - did not produce a TEST-*.xml file (likely 
timed out) (batchId=228)
TestHiveMetaStoreSchemaMethods - did not produce a TEST-*.xml file (likely 
timed out) (batchId=235)
TestHiveMetaStoreTimeout - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=237)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=232)
TestHiveMetaToolCommandLine - did not produce a TEST-*.xml file (likely timed 
out) (batchId=230)
TestHiveMetastoreCli - did not produce a TEST-*.xml file (likely timed out) 
(batchId=226)
TestHmsServerAuthorization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=235)
TestHyperLogLog - did not

[jira] [Commented] (HIVE-22539) HiveServer2 SPNEGO authentication should skip if authorization header is empty

2020-03-18 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061808#comment-17061808
 ] 

Kevin Risden commented on HIVE-22539:
-

Thanks Zoltan!

> HiveServer2 SPNEGO authentication should skip if authorization header is empty
> --
>
> Key: HIVE-22539
> URL: https://issues.apache.org/jira/browse/HIVE-22539
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22539.1.patch, HIVE-22539.2.patch, 
> HIVE-22539.3.patch, HIVE-22539.4.patch, HIVE-22539.5.patch, 
> HIVE-22539.6.patch, HIVE-22539.7.patch
>
>
> Currently HiveServer2 SPNEGO authentication waits until setting up Kerberos 
> before checking the header. The header can be checked up front to avoid doing 
> any Kerberos-related work if it is empty. This is helpful in many cases, since 
> typically the first request is empty, with the client waiting for a 401 before 
> sending the Authorization header.
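
The up-front check described above can be sketched as follows; the class and method names are illustrative stand-ins, not Hive's actual ThriftHttpServlet code:

```java
// Hypothetical sketch of checking the Authorization header before doing any
// Kerberos work: an empty or missing header means the server should respond
// 401 with a "WWW-Authenticate: Negotiate" challenge immediately.
public class SpnegoPreCheck {

    /** True when the request carries no usable Authorization header. */
    public static boolean shouldChallengeClient(String authHeader) {
        return authHeader == null || authHeader.trim().isEmpty();
    }
}
```

With this guard in place, the servlet only enters the Kerberos code path once the client has answered the 401 challenge.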



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22841) ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner IllegalArgumentException on invalid cookie signature

2020-03-18 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061806#comment-17061806
 ] 

Kevin Risden commented on HIVE-22841:
-

Thanks Zoltan!

> ThriftHttpServlet#getClientNameFromCookie should handle CookieSigner 
> IllegalArgumentException on invalid cookie signature
> -
>
> Key: HIVE-22841
> URL: https://issues.apache.org/jira/browse/HIVE-22841
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22841.1.patch, HIVE-22841.2.patch, 
> HIVE-22841.3.patch
>
>
> Currently CookieSigner throws an IllegalArgumentException if the cookie 
> signature is invalid. 
> {code:java}
> if (!MessageDigest.isEqual(originalSignature.getBytes(), 
> currentSignature.getBytes())) {
>   throw new IllegalArgumentException("Invalid sign, original = " + 
> originalSignature +
> " current = " + currentSignature);
> }
> {code}
> CookieSigner is only used in the ThriftHttpServlet#getClientNameFromCookie 
> and doesn't handle the IllegalArgumentException. It is only checking if the 
> value from the cookie is null or not.
> https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java#L295
> {code:java}
>   currValue = signer.verifyAndExtract(currValue);
>   // Retrieve the user name, do the final validation step.
>   if (currValue != null) {
> {code}
> This should be fixed to either:
> a) Have CookieSigner not return an IllegalArgumentException
> b) Improve ThriftHttpServlet to handle CookieSigner throwing an 
> IllegalArgumentException
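
Option (b) can be sketched like this; the `Signer` interface is a stand-in for CookieSigner introduced for illustration, not Hive's real API:

```java
// Sketch of handling CookieSigner's IllegalArgumentException in the caller:
// an invalid signature is treated the same as a missing cookie (null), so the
// servlet falls back to regular authentication instead of failing the request.
public class CookieCheck {

    /** Stand-in for CookieSigner#verifyAndExtract. */
    interface Signer {
        String verifyAndExtract(String signedValue);
    }

    static String verifyOrNull(Signer signer, String signedValue) {
        try {
            return signer.verifyAndExtract(signedValue);
        } catch (IllegalArgumentException e) {
            // Invalid sign -> behave as if no valid cookie was presented.
            return null;
        }
    }
}
```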



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-23032:
--
Attachment: (was: HIVE-23032.1.patch)

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests 
> showed a significant improvement after turning batching on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-23032:
--
Attachment: HIVE-23032.1.patch

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests 
> showed a significant improvement after turning batching on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-23032:
--
Attachment: HIVE-23032.1.patch

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests 
> showed a significant improvement after turning batching on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-23032:
--
Attachment: (was: HIVE-23032.1.patch)

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests 
> showed a significant improvement after turning batching on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23032) Add batching in Lock generation

2020-03-18 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-23032:
--
Status: Patch Available  (was: Open)

> Add batching in Lock generation
> ---
>
> Key: HIVE-23032
> URL: https://issues.apache.org/jira/browse/HIVE-23032
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-23032.1.patch
>
>
> Replace the multi-row insert in Oracle with batching. Performance tests 
> showed a significant improvement after turning batching on.
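
The batching idea can be sketched as below: instead of one large multi-row INSERT, lock rows are grouped into fixed-size batches, each of which would then be flushed in a single JDBC `PreparedStatement.addBatch()`/`executeBatch()` round trip. The partition helper and the batch size are assumptions for illustration, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper: split the rows destined for the lock table into
// batches so each batch can be sent via addBatch()/executeBatch() rather
// than as one multi-row INSERT statement.
public class LockBatcher {

    static <T> List<List<T>> partition(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }
}
```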



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23044) Make sure Cleaner doesn't delete delta directories for running queries

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061761#comment-17061761
 ] 

Hive QA commented on HIVE-23044:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 14s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-21160/patches/PreCommit-HIVE-Build-21160.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21160/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Make sure Cleaner doesn't delete delta directories for running queries
> --
>
> Key: HIVE-23044
> URL: https://issues.apache.org/jira/browse/HIVE-23044
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Major
> Attachments: HIVE-23044.1.branch-3.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061755#comment-17061755
 ] 

Hive QA commented on HIVE-22997:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12997017/HIVE-22997.11.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18118 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestScheduledReplicationScenarios.testExternalTablesReplLoadBootstrapIncr
 (batchId=270)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21159/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21159/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21159/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12997017 - PreCommit-HIVE-Build

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23048) Use sequences for TXN_ID generation

2020-03-18 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-23048:
-


> Use sequences for TXN_ID generation
> ---
>
> Key: HIVE-23048
> URL: https://issues.apache.org/jira/browse/HIVE-23048
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: (was: HIVE-22997.12.patch)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: HIVE-22997.12.patch

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23042) Merge queries to a single one for updating MIN_OPEN_TXNS table

2020-03-18 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061750#comment-17061750
 ] 

Peter Vary commented on HIVE-23042:
---

I thought we should handle batching as a separate topic.
This fix is a clear win performance-wise, but batching still has some open 
questions we have to settle or improve on before it is a clear winner.

Your thoughts?

> Merge queries to a single one for updating MIN_OPEN_TXNS table
> --
>
> Key: HIVE-23042
> URL: https://issues.apache.org/jira/browse/HIVE-23042
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23042.patch
>
>
> When opening a new transaction we issue 2 queries to update the MIN_OPEN_TXN 
> table.
> {code}
> 
>  values(763, 763)>
> {code}
> This could be achieved faster with a single query, if we do not open 
> transactions in batch, like:
> {code}
>SELECT ?, MIN("TXN_ID") FROM "TXNS" WHERE "TXN_STATE" = 'o'>
> {code}
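
The proposed single-query form can be sketched as one INSERT ... SELECT; the MIN_HISTORY_LEVEL table and column names below are an assumption based on the snippet above, and the exact statement Hive ends up using may differ:

```java
// Builds the merged statement: the MIN("TXN_ID") lookup and the insert happen
// in a single round trip instead of a SELECT followed by a separate INSERT.
public class MinOpenTxnSql {

    static String mergedInsert() {
        return "INSERT INTO \"MIN_HISTORY_LEVEL\" (\"MHL_TXNID\", \"MHL_MIN_OPEN_TXNID\") "
            + "SELECT ?, MIN(\"TXN_ID\") FROM \"TXNS\" WHERE \"TXN_STATE\" = 'o'";
    }
}
```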



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22997:
--
Attachment: HIVE-22997.12.patch

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.12.patch, HIVE-22997.2.patch, HIVE-22997.4.patch, 
> HIVE-22997.5.patch, HIVE-22997.6.patch, HIVE-22997.7.patch, 
> HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23047) Calculate the epoch on DB side

2020-03-18 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-23047:
--
Attachment: HIVE-23047.patch

> Calculate the epoch on DB side
> --
>
> Key: HIVE-23047
> URL: https://issues.apache.org/jira/browse/HIVE-23047
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23047.patch
>
>
> We use TxnHandler.getDbTime to calculate the epoch on the DB server, and 
> immediately insert the value back again. We would be better off using SQL 
> to calculate the value.
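
Computing the epoch directly in SQL could look like the statements below; these are common forms for each engine and an assumption about what the patch does, not its verified contents:

```java
// Returns a query that yields milliseconds since the Unix epoch, computed by
// the database itself instead of being fetched by TxnHandler.getDbTime and
// re-inserted from the client side.
public class EpochSql {

    static String epochMillisQuery(String dbProduct) {
        switch (dbProduct) {
            case "postgres":
                // extract(epoch from ...) yields seconds as a numeric value.
                return "SELECT round(extract(epoch from current_timestamp) * 1000)";
            case "mysql":
                // now(3) keeps millisecond precision.
                return "SELECT round(unix_timestamp(now(3)) * 1000)";
            default:
                throw new IllegalArgumentException("no epoch query for " + dbProduct);
        }
    }
}
```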



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23047) Calculate the epoch on DB side

2020-03-18 Thread Peter Vary (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-23047:
--
Status: Patch Available  (was: Open)

> Calculate the epoch on DB side
> --
>
> Key: HIVE-23047
> URL: https://issues.apache.org/jira/browse/HIVE-23047
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23047.patch
>
>
> We use TxnHandler.getDbTime to calculate the epoch on the DB server, and 
> immediately insert the value back again. We would be better off using SQL 
> to calculate the value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23042) Merge queries to a single one for updating MIN_OPEN_TXNS table

2020-03-18 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061744#comment-17061744
 ] 

Denys Kuzmenko commented on HIVE-23042:
---

[~pvary], why not use batching (at least for Postgres/Oracle)? Anyway, you 
already have an "if".

> Merge queries to a single one for updating MIN_OPEN_TXNS table
> --
>
> Key: HIVE-23042
> URL: https://issues.apache.org/jira/browse/HIVE-23042
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23042.patch
>
>
> When opening a new transaction we issue 2 queries to update the MIN_OPEN_TXN 
> table.
> {code}
> 
>  values(763, 763)>
> {code}
> This could be achieved faster with a single query, if we do not open 
> transactions in batch, like:
> {code}
>SELECT ?, MIN("TXN_ID") FROM "TXNS" WHERE "TXN_STATE" = 'o'>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-23042) Merge queries to a single one for updating MIN_OPEN_TXNS table

2020-03-18 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061744#comment-17061744
 ] 

Denys Kuzmenko edited comment on HIVE-23042 at 3/18/20, 1:52 PM:
-

[~pvary], why not use batching (at least for Postgres/Oracle)? Anyway, you 
already have an "if".
{code}
if (txnIds.size() == 1)
{code}


was (Author: dkuzmenko):
[~pvary], why not use batching (at least for Postgres/Oracle)? Anyways you 
already have an "if".

> Merge queries to a single one for updating MIN_OPEN_TXNS table
> --
>
> Key: HIVE-23042
> URL: https://issues.apache.org/jira/browse/HIVE-23042
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-23042.patch
>
>
> When opening a new transaction we issue 2 queries to update the MIN_OPEN_TXN 
> table.
> {code}
> 
>  values(763, 763)>
> {code}
> This could be achieved faster with a single query, if we do not open 
> transactions in batch, like:
> {code}
>SELECT ?, MIN("TXN_ID") FROM "TXNS" WHERE "TXN_STATE" = 'o'>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23044) Make sure Cleaner doesn't delete delta directories for running queries

2020-03-18 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061742#comment-17061742
 ] 

Peter Vary commented on HIVE-23044:
---

+1

> Make sure Cleaner doesn't delete delta directories for running queries
> --
>
> Key: HIVE-23044
> URL: https://issues.apache.org/jira/browse/HIVE-23044
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Zoltan Chovan
>Assignee: Zoltan Chovan
>Priority: Major
> Attachments: HIVE-23044.1.branch-3.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061716#comment-17061716
 ] 

Hive QA commented on HIVE-22997:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
48s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 79 unchanged - 2 fixed 
= 80 total (was 81) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 649 
unchanged - 1 fixed = 649 total (was 650) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
48s{color} | {color:red} ql generated 1 new + 1530 unchanged - 1 fixed = 1531 
total (was 1531) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  org.apache.hadoop.hive.ql.exec.repl.ReplDumpWork is Serializable; 
consider declaring a serialVersionUID  At ReplDumpWork.java:a serialVersionUID  
At ReplDumpWork.java:[lines 39-119] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21159/dev-support/hive-personality.sh
 |
| git revision | master / 88f8c80 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21159/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21159/yetus/new-findbugs-ql.html
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21159/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.

[jira] [Commented] (HIVE-22955) PreUpgradeTool can fail because access to CharsetDecoder is not synchronized

2020-03-18 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17061680#comment-17061680
 ] 

Hive QA commented on HIVE-22955:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12996996/HIVE-22955.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18118 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21158/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21158/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21158/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12996996 - PreCommit-HIVE-Build

> PreUpgradeTool can fail because access to CharsetDecoder is not synchronized
> 
>
> Key: HIVE-22955
> URL: https://issues.apache.org/jira/browse/HIVE-22955
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22955.1.patch, HIVE-22955.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2020-02-26 20:22:49,683 ERROR [main] acid.PreUpgradeTool 
> (PreUpgradeTool.java:main(150)) - PreUpgradeTool failed 
> org.apache.hadoop.hive.ql.metadata.HiveException at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.prepareAcidUpgradeInternal(PreUpgradeTool.java:283)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.main(PreUpgradeTool.java:146)
>  Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
> ...
> Caused by: java.lang.IllegalStateException: Current state = RESET, new state 
> = FLUSHED at 
> java.nio.charset.CharsetDecoder.throwIllegalStateException(CharsetDecoder.java:992)
>  at java.nio.charset.CharsetDecoder.flush(CharsetDecoder.java:675) at 
> java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:804) at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:606)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:567)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.getCompactionCommands(PreUpgradeTool.java:464)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.processTable(PreUpgradeTool.java:374)
> {code}
> This is probably caused by HIVE-21948.
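
One common way to remove the shared-decoder race (sketched here; not necessarily the approach the actual patch takes) is to give each thread its own CharsetDecoder:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

// CharsetDecoder keeps internal state and is not thread-safe: two threads
// interleaving decode()/flush() on one instance can trigger exactly the
// "Current state = RESET, new state = FLUSHED" failure above. A per-thread
// decoder removes the sharing entirely.
public class SafeDecode {

    private static final ThreadLocal<CharsetDecoder> DECODER =
        ThreadLocal.withInitial(() -> StandardCharsets.UTF_8.newDecoder());

    static String decode(byte[] bytes) {
        try {
            // The convenience decode(ByteBuffer) method resets the decoder
            // first, so repeated calls on the same thread are safe.
            return DECODER.get().decode(ByteBuffer.wrap(bytes)).toString();
        } catch (CharacterCodingException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```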



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=405386&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405386
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 12:27
Start Date: 18/Mar/20 12:27
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r394308202
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestScheduledReplicationScenarios.java
 ##
 @@ -151,6 +151,70 @@ public void testAcidTablesReplLoadBootstrapIncr() throws 
Throwable {
   .run("drop table t1");
 
 
+} finally {
+  primary.run("drop scheduled query s1");
+  replica.run("drop scheduled query s2");
+}
+  }
+
+  @Test
+  public void testExternalTablesReplLoadBootstrapIncr() throws Throwable {
+// Bootstrap
+primary.run("use " + primaryDbName)
+.run("create external table t1 (id int)")
+.run("insert into t1 values(1)")
+.run("insert into t1 values(2)");
+try (ScheduledQueryExecutionService schqS =
+ 
ScheduledQueryExecutionService.startScheduledQueryExecutorService(primary.hiveConf))
 {
+  int next = 0;
+  ReplDumpWork.injectNextDumpDirForTest(String.valueOf(next));
+  primary.run("create scheduled query s1 every 10 minutes as repl dump " + 
primaryDbName);
+  primary.run("alter scheduled query s1 execute");
+  Thread.sleep(2);
 
 Review comment:
   Ok
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 405386)
Time Spent: 5.5h  (was: 5h 20m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.2.patch, HIVE-22997.4.patch, HIVE-22997.5.patch, 
> HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=405385&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405385
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 12:26
Start Date: 18/Mar/20 12:26
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r394307963
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
 ##
 @@ -78,22 +81,26 @@
 import java.io.IOException;
 import java.io.Serializable;
 import java.nio.charset.StandardCharsets;
+import java.util.Iterator;
 import java.util.Set;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.Base64;
-import java.util.ArrayList;
+import java.util.LinkedList;
 import java.util.UUID;
+import java.util.ArrayList;
 import java.util.concurrent.TimeUnit;
 import static org.apache.hadoop.hive.ql.exec.repl.ReplExternalTables.Writer;
 
 public class ReplDumpTask extends Task implements Serializable {
+  private static final long serialVersionUID = 1L;
   private static final String dumpSchema = 
"dump_dir,last_repl_id#string,string";
   private static final String FUNCTION_METADATA_FILE_NAME = 
EximUtil.METADATA_NAME;
   private static final long SLEEP_TIME = 6;
   Set tablesForBootstrap = new HashSet<>();
+  private Path dumpAckFile;
 
 Review comment:
We don't store the dump path anywhere, so this info will be lost across multiple calls. I guess ReplDumpWork is a better place for it.
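The review point above reflects a general pattern: state that must survive repeated task executions belongs on the serializable work object, because the task instance may be re-created between calls. A hypothetical sketch of that pattern (class, field, and path names are illustrative only, not Hive's actual `ReplDumpWork`/`ReplDumpTask`):

```java
import java.io.Serializable;

public class WorkStateDemo {
    // The work object carries durable, serializable state for the operation.
    static class DumpWork implements Serializable {
        private static final long serialVersionUID = 1L;
        private String dumpAckFile; // persists with the work across task runs
        String getDumpAckFile() { return dumpAckFile; }
        void setDumpAckFile(String path) { dumpAckFile = path; }
    }

    // The task is transient: a fresh instance may run each execution.
    static class DumpTask {
        private final DumpWork work;
        DumpTask(DumpWork work) { this.work = work; }
        void execute() {
            if (work.getDumpAckFile() == null) {
                work.setDumpAckFile("/staging/dump/_ack"); // illustrative path
            }
        }
    }

    public static void main(String[] args) {
        DumpWork work = new DumpWork();
        new DumpTask(work).execute(); // first execution records the path
        new DumpTask(work).execute(); // a fresh task still sees it via work
        if (!"/staging/dump/_ack".equals(work.getDumpAckFile())) {
            throw new AssertionError("path lost between executions");
        }
        System.out.println("ok");
    }
}
```

Had the field stayed on the task, the second (fresh) task instance would have started with `null` again, which is the loss the reviewer is flagging.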
 



Issue Time Tracking
---

Worklog Id: (was: 405385)
Time Spent: 5h 20m  (was: 5h 10m)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.2.patch, HIVE-22997.4.patch, HIVE-22997.5.patch, 
> HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-22997) Copy external table to target during Repl Dump operation

2020-03-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22997?focusedWorklogId=405384&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-405384
 ]

ASF GitHub Bot logged work on HIVE-22997:
-

Author: ASF GitHub Bot
Created on: 18/Mar/20 12:25
Start Date: 18/Mar/20 12:25
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #951: HIVE-22997 
: Copy external table to target during Repl Dump operation
URL: https://github.com/apache/hive/pull/951#discussion_r394307271
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplDumpTask.java
 ##
 @@ -662,17 +690,48 @@ void dumpTable(String dbName, String tblName, String 
validTxnList, Path dbRoot,
 replLogger.tableLog(tblName, tableSpec.tableHandle.getTableType());
 if (tableSpec.tableHandle.getTableType().equals(TableType.EXTERNAL_TABLE)
 || Utils.shouldDumpMetaDataOnly(conf)) {
-  return;
+  return Collections.EMPTY_LIST;
+}
+return replPathMappings;
+  }
+
+  private void intitiateDataCopyTasks() {
+Iterator extCopyWorkItr = 
work.getDirCopyIterator();
+ReplOperationCompleteAckWork replDumpCompleteAckWork = new 
ReplOperationCompleteAckWork(dumpAckFile);
+Task dumpCompleteAckWorkTask = 
TaskFactory.get(replDumpCompleteAckWork, conf);
+List> childTasks = new ArrayList<>();
+int maxTasks = 
conf.getIntVar(HiveConf.ConfVars.REPL_APPROX_MAX_LOAD_TASKS);
+TaskTracker taskTracker = new TaskTracker(maxTasks);
+while (taskTracker.canAddMoreTasks() && hasMoreCopyWork()) {
+  if (work.replPathIteratorInitialized() && extCopyWorkItr.hasNext()) {
+childTasks.addAll(new ExternalTableCopyTaskBuilder(work, 
conf).tasks(taskTracker));
+  } else {
+childTasks.addAll(ReplPathMapping.tasks(work, taskTracker, conf));
+  }
 }
-for (ReplPathMapping replPathMapping: replPathMappings) {
-  Task copyTask = ReplCopyTask.getLoadCopyTask(
-  tuple.replicationSpec, replPathMapping.getSrcPath(), 
replPathMapping.getTargetPath(), conf, false);
-  this.addDependentTask(copyTask);
-  LOG.info("Scheduled a repl copy task from [{}] to [{}]",
-  replPathMapping.getSrcPath(), replPathMapping.getTargetPath());
+if (!childTasks.isEmpty()) {
+  boolean ackTaskAdded = false;
+  if (taskTracker.canAddMoreTasks()) {
+childTasks.add(dumpCompleteAckWorkTask);
+ackTaskAdded = true;
+  }
+  if (hasMoreCopyWork() || !ackTaskAdded) {
+DAGTraversal.traverse(childTasks, new 
AddDependencyToLeaves(TaskFactory.get(work, conf)));
 
 Review comment:
   Ack is being added at the end in these cases as well
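The `AddDependencyToLeaves` usage in the diff above follows a common pattern: walk the task DAG and attach a follow-up (here, the ack task) to every leaf, so it only becomes runnable once all pending work has finished. A minimal generic sketch, using a hypothetical `Node` type rather than Hive's actual `Task`/`DAGTraversal` classes:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AddToLeaves {
    // Hypothetical task node with child dependencies (not Hive's Task class).
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    // Breadth-first traversal that attaches `ack` to every leaf, so the ack
    // task only becomes runnable after all existing work has finished.
    static void addDependencyToLeaves(List<Node> roots, Node ack) {
        Deque<Node> queue = new ArrayDeque<>(roots);
        Set<Node> seen = new HashSet<>();
        while (!queue.isEmpty()) {
            Node n = queue.poll();
            if (!seen.add(n)) {
                continue; // already visited via another path in the DAG
            }
            if (n.children.isEmpty()) {
                n.children.add(ack); // leaf: ack now depends on it
            } else {
                queue.addAll(n.children);
            }
        }
    }

    public static void main(String[] args) {
        Node a = new Node("copy-1");
        Node b = new Node("copy-2");
        Node c = new Node("merge");
        a.children.add(c); // a -> c; b is an independent leaf
        Node ack = new Node("dump-ack");
        addDependencyToLeaves(Arrays.asList(a, b), ack);
        if (!c.children.contains(ack) || !b.children.contains(ack)
                || a.children.contains(ack)) {
            throw new AssertionError("ack not attached to leaves only");
        }
        System.out.println("ok");
    }
}
```

This also shows why the patch falls back to chaining another `ReplDumpTask` when the tracker is full: the ack must always remain the last leaf of whatever DAG fragment is scheduled.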
 



Issue Time Tracking
---

Worklog Id: (was: 405384)
Time Spent: 5h 10m  (was: 5h)

> Copy external table to target during Repl Dump operation
> 
>
> Key: HIVE-22997
> URL: https://issues.apache.org/jira/browse/HIVE-22997
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22997.03.patch, HIVE-22997.04.patch, 
> HIVE-22997.1.patch, HIVE-22997.10.patch, HIVE-22997.11.patch, 
> HIVE-22997.2.patch, HIVE-22997.4.patch, HIVE-22997.5.patch, 
> HIVE-22997.6.patch, HIVE-22997.7.patch, HIVE-22997.8.patch, HIVE-22997.9.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>




  1   2   >