[jira] [Commented] (HIVE-22551) BytesColumnVector initBuffer should clean vector and length consistently

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984813#comment-16984813
 ] 

Hive QA commented on HIVE-22551:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987098/HIVE-22551.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 17817 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_insert1_overwrite_partitions]
 (batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_insert2_overwrite_partitions]
 (batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_merge_dynamic_partition2]
 (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_merge_dynamic_partition3]
 (batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_merge_dynamic_partition4]
 (batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_merge_dynamic_partition5]
 (batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_merge_dynamic_partition]
 (batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_partition_boolexpr]
 (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_partition_ctas]
 (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[temp_table_partition_multilevels]
 (batchId=100)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[temp_table_llap_partitioned]
 (batchId=167)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19668/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19668/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19668/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987098 - PreCommit-HIVE-Build

> BytesColumnVector initBuffer should clean vector and length consistently 
> -
>
> Key: HIVE-22551
> URL: https://issues.apache.org/jira/browse/HIVE-22551
> Project: Hive
>  Issue Type: Bug
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Attachments: HIVE-22551.01.patch, HIVE-22551.01.patch, 
> HIVE-22551.01.patch
>
>
> VectorExtractRow relies on the fact that vector[i] and length[i] are 
> consistent within the BytesColumnVector, otherwise it throws exception:
> https://github.com/apache/hive/blob/edc53cc0d95e983c371a224943dd866210f0c65c/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExtractRow.java#L275
> There is a scenario in which only vector[i] has been cleaned while reusing the 
> column vector, and then this kind of exception can be thrown. The reproduction 
> was made with 
> [LlapDump|https://github.com/apache/hive/blob/master/llap-ext-client/src/java/org/apache/hadoop/hive/llap/LlapDump.java]
>  with String columns (longer than 16 chars):
> {code}
> 19/10/17 15:55:49 ERROR llap.LlapArrowRowRecordReader: Failed to fetch Arrow 
> batch
> java.lang.RuntimeException: STRING entry: batchIndex 45
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.BytesReadError(VectorExtractRow.java:488)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:193)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:483)
> at 
> org.apache.hadoop.hive.ql.io.arrow.Deserializer.deserialize(Deserializer.java:125)
> at 
> org.apache.hadoop.hive.ql.io.arrow.ArrowColumnarBatchSerDe.deserialize(ArrowColumnarBatchSerDe.java:284)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:75)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41)
> at datareader.LlapDump.main(LlapDump.java:124)
> {code}
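> A minimal, self-contained sketch of the invariant involved (the field names mirror BytesColumnVector, but the class and its reset loop are only illustrative, not the actual patch):
> {code:java}
> // For every row i, vector[i], start[i] and length[i] must describe the
> // same value. Clearing only vector[i] when the column is reused leaves a
> // stale length[i] behind, which is what trips VectorExtractRow above.
> public class BytesColumnSketch {
>   private final byte[][] vector = new byte[1024][];
>   private final int[] start = new int[1024];
>   private final int[] length = new int[1024];
>
>   // Hypothetical reset, analogous to initBuffer(): clear all three
>   // arrays together so old lengths never pair with new values.
>   public void resetForReuse() {
>     for (int i = 0; i < vector.length; i++) {
>       vector[i] = null;
>       start[i] = 0;
>       length[i] = 0;
>     }
>   }
> }
> {code}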



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-20150) TopNKey pushdown

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984814#comment-16984814
 ] 

Hive QA commented on HIVE-20150:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987118/HIVE-20150.17.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19669/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19669/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19669/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-11-29 08:38:23.362
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-19669/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-11-29 08:38:23.365
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at ab71e5a HIVE-22280: Q tests for partitioned temporary tables 
(Laszlo Pinter via Peter Vary)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at ab71e5a HIVE-22280: Q tests for partitioned temporary tables 
(Laszlo Pinter via Peter Vary)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-11-29 08:38:24.302
+ rm -rf ../yetus_PreCommit-HIVE-Build-19669
+ mkdir ../yetus_PreCommit-HIVE-Build-19669
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-19669
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-19669/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/TopNKeyProcessor.java:59
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/TopNKeyProcessor.java' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/perf/tez/constraints/query70.q.out:100
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/perf/tez/constraints/query70.q.out' cleanly.
error: patch failed: 
ql/src/test/results/clientpositive/perf/tez/query70.q.out:100
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/perf/tez/query70.q.out' 
cleanly.
Going to apply patch with: git apply -p0
/data/hiveptest/working/scratch/build.patch:3535: trailing whitespace.
Map 5 
/data/hiveptest/working/scratch/build.patch:3591: trailing whitespace.
Reducer 3 
/data/hiveptest/working/scratch/build.patch:3611: trailing whitespace.
Reducer 4 
/data/hiveptest/working/scratch/build.patch:3722: trailing whitespace.
Map 5 
/data/hiveptest/working/scratch/build.patch:3781: trailing whitespace.
Reducer 4 
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/TopNKeyProcessor.java:59
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/TopNKeyProcessor.java' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/perf/tez/constraints/query70.q.out:100
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/perf/tez/constraints/query70.q.out' cleanly.
error: patch failed: 
ql/src/test/results/clientpositive/perf/tez/query70.q.out:100
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/perf/tez/query70.q.out' 
cleanly.
U ql/src/java/org/apache/hadoop/hive/ql/optimizer/TopNKeyProcessor.java
warning: squelched 5 whitespace errors
warning: 10 lines add

[jira] [Updated] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21266:
-
Attachment: HIVE-21266.02.patch
Status: Patch Available  (was: Open)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This check is problematic.
> Suppose you have one delta file from streaming ingest, {{delta_11_20}}, in which 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (rewriting 
> would drop anything that belongs to the aborted txn), yet the compaction still 
> transitions to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now the aborted data comes back as committed.
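> A hedged sketch of the decision this description calls for (names are hypothetical, not the actual patch): skipping compaction must not let the request proceed to the cleaner, because {{markCleaned()}} would drop the aborted-txn metadata.
> {code:java}
> public final class CompactionSkipSketch {
>
>   enum Outcome { COMPACT, SKIP, SKIP_WITHOUT_CLEANING }
>
>   static Outcome decide(int deltaCount, boolean hasBase, int origCount,
>                         boolean hasAbortedWrites) {
>     // Same arithmetic as the check above: at most one file to work on.
>     boolean singleFile = (deltaCount + (hasBase ? 1 : 0)) + origCount <= 1;
>     if (!singleFile) {
>       return Outcome.COMPACT;
>     }
>     // delta_11_20 with aborted txnid:13 lands here: there is nothing to
>     // merge, but the aborted data must not be marked cleaned either.
>     return hasAbortedWrites ? Outcome.SKIP_WITHOUT_CLEANING : Outcome.SKIP;
>   }
> }
> {code}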



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21266:
-
Status: Open  (was: Patch Available)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This check is problematic.
> Suppose you have one delta file from streaming ingest, {{delta_11_20}}, in which 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (rewriting 
> would drop anything that belongs to the aborted txn), yet the compaction still 
> transitions to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now the aborted data comes back as committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22327) Repl: Ignore read-only transactions in notification log

2019-11-29 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22327:
--
Attachment: HIVE-22327.14.patch

> Repl: Ignore read-only transactions in notification log
> ---
>
> Key: HIVE-22327
> URL: https://issues.apache.org/jira/browse/HIVE-22327
> Project: Hive
>  Issue Type: Improvement
>  Components: repl
>Reporter: Gopal Vijayaraghavan
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22327.1.patch, HIVE-22327.10.patch, 
> HIVE-22327.11.patch, HIVE-22327.12.patch, HIVE-22327.13.patch, 
> HIVE-22327.14.patch, HIVE-22327.2.patch, HIVE-22327.3.patch, 
> HIVE-22327.4.patch, HIVE-22327.5.patch, HIVE-22327.6.patch, 
> HIVE-22327.7.patch, HIVE-22327.8.patch, HIVE-22327.9.patch
>
>
> Read txns need not be replicated.
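> A minimal sketch of the filtering this implies (hypothetical types, not Hive's actual replication code): a transaction that wrote nothing changes no state on the target, so its events can be dropped from the dump.
> {code:java}
> import java.util.List;
> import java.util.stream.Collectors;
>
> public final class ReadOnlyTxnFilterSketch {
>
>   static final class TxnEvent {
>     final long txnId;
>     final boolean wroteData;
>     TxnEvent(long txnId, boolean wroteData) {
>       this.txnId = txnId;
>       this.wroteData = wroteData;
>     }
>   }
>
>   // Keep only transactions that actually changed data.
>   static List<TxnEvent> eventsToReplicate(List<TxnEvent> notificationLog) {
>     return notificationLog.stream()
>         .filter(e -> e.wroteData)
>         .collect(Collectors.toList());
>   }
> }
> {code}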



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984834#comment-16984834
 ] 

Hive QA commented on HIVE-22544:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
14s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19670/dev-support/hive-personality.sh
 |
| git revision | master / ab71e5a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19670/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22280) Q tests for partitioned temporary tables

2019-11-29 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984852#comment-16984852
 ] 

Peter Vary commented on HIVE-22280:
---

Reverted, because a concurrent commit (HIVE-22481) caused tests to fail

> Q tests for partitioned temporary tables
> 
>
> Key: HIVE-22280
> URL: https://issues.apache.org/jira/browse/HIVE-22280
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22280.01.patch, HIVE-22280.02.patch, 
> HIVE-22280.03.patch, HIVE-22280.04.patch, HIVE-22280.05.patch, 
> HIVE-22280.06.patch, HIVE-22280.07.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22502) ConcurrentModificationException in TriggerValidatorRunnable stops trigger processing

2019-11-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22502:
-
Attachment: HIVE-22502.1.patch

> ConcurrentModificationException in TriggerValidatorRunnable stops trigger 
> processing
> 
>
> Key: HIVE-22502
> URL: https://issues.apache.org/jira/browse/HIVE-22502
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Attachments: HIVE-22502.1.patch
>
>
> Another thread is modifying the list that contains the sessions while 
> TriggerValidatorRunnable is traversing it. This causes the 
> TriggerValidatorRunnable thread to die, and triggers are no longer properly 
> monitored.
>  
> {code:java}
> <12>1 2019-11-14T00:31:12.187Z 
> hiveserver2-0.hiveserver2-service.compute-1572769905-6965.svc.cluster.local 
> hiveserver2 1 fa2f30b6-ffb3-11e9-93ba-0a257c2413a2 [mdc@18060 
> class="tez.TriggerValidatorRunnable" level="WARN" thread="TriggerValidator"] 
> TriggerValidatorRunnable caught exception.<12>1 2019-11-14T00:31:12.187Z 
> hiveserver2-0.hiveserver2-service.compute-1572769905-6965.svc.cluster.local 
> hiveserver2 1 fa2f30b6-ffb3-11e9-93ba-0a257c2413a2 [mdc@18060 
> class="tez.TriggerValidatorRunnable" level="WARN" thread="TriggerValidator"] 
> TriggerValidatorRunnable caught 
> exception.java.util.ConcurrentModificationException at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966) at 
> java.util.LinkedList$ListItr.next(LinkedList.java:888) at 
> java.util.Collections$UnmodifiableCollection$1.next(Collections.java:1044) at 
> org.apache.hadoop.hive.ql.exec.tez.TriggerValidatorRunnable.run(TriggerValidatorRunnable.java:49)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)             {code}
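> A common fix pattern for this failure mode, shown as a hedged sketch (the class and field names here are assumptions, not the actual patch): take a snapshot of the session list under synchronization and iterate the copy, so concurrent add/remove can no longer break the validator loop.
> {code:java}
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
>
> public final class TriggerValidatorSketch {
>
>   private final List<String> openSessions =
>       Collections.synchronizedList(new ArrayList<>());
>
>   // Iterate over a snapshot; mutations of openSessions on other threads
>   // can no longer throw ConcurrentModificationException inside the loop.
>   void validateTriggers() {
>     final List<String> snapshot;
>     synchronized (openSessions) {
>       snapshot = new ArrayList<>(openSessions);
>     }
>     for (String session : snapshot) {
>       // apply trigger rules to each session ...
>     }
>   }
> }
> {code}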



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22327) Repl: Ignore read-only transactions in notification log

2019-11-29 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984863#comment-16984863
 ] 

Denys Kuzmenko commented on HIVE-22327:
---

Unrelated failures caused by NULL sort order change.

> Repl: Ignore read-only transactions in notification log
> ---
>
> Key: HIVE-22327
> URL: https://issues.apache.org/jira/browse/HIVE-22327
> Project: Hive
>  Issue Type: Improvement
>  Components: repl
>Reporter: Gopal Vijayaraghavan
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22327.1.patch, HIVE-22327.10.patch, 
> HIVE-22327.11.patch, HIVE-22327.12.patch, HIVE-22327.13.patch, 
> HIVE-22327.14.patch, HIVE-22327.2.patch, HIVE-22327.3.patch, 
> HIVE-22327.4.patch, HIVE-22327.5.patch, HIVE-22327.6.patch, 
> HIVE-22327.7.patch, HIVE-22327.8.patch, HIVE-22327.9.patch
>
>
> Read txns need not be replicated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21266:
-
Status: Open  (was: Patch Available)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This check is problematic.
> Suppose you have one delta file from streaming ingest, {{delta_11_20}}, in which 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (rewriting 
> would drop anything that belongs to the aborted txn), yet the compaction still 
> transitions to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now the aborted data comes back as committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21266:
-
Attachment: HIVE-21266.02.patch
Status: Patch Available  (was: Open)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch, 
> HIVE-21266.02.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This check is problematic.
> Suppose you have one delta file from streaming ingest, {{delta_11_20}}, in which 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (rewriting 
> would drop anything that belongs to the aborted txn), yet the compaction still 
> transitions to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now the aborted data comes back as committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984869#comment-16984869
 ] 

Hive QA commented on HIVE-22544:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987122/HIVE-22544.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 17818 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestTxnCommands.testMergeOnTezEdges (batchId=358)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.org.apache.hadoop.hive.ql.TestWarehouseExternalDir
 (batchId=279)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testExternalDefaultPaths 
(batchId=279)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19670/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19670/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19670/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987122 - PreCommit-HIVE-Build

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21050) Upgrade Parquet to 1.11.0 and use LogicalTypes

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984871#comment-16984871
 ] 

Hive QA commented on HIVE-21050:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987123/HIVE-21050.6.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19671/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19671/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19671/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-11-29 10:06:20.002
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-19671/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-11-29 10:06:20.006
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   ab71e5a..b9bdbed  master -> origin/master
+ git reset --hard HEAD
HEAD is now at ab71e5a HIVE-22280: Q tests for partitioned temporary tables 
(Laszlo Pinter via Peter Vary)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/master
HEAD is now at b9bdbed Revert "HIVE-22280: Q tests for partitioned temporary 
tables (Laszlo Pinter via Peter Vary)"
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-11-29 10:06:22.004
+ rm -rf ../yetus_PreCommit-HIVE-Build-19671
+ mkdir ../yetus_PreCommit-HIVE-Build-19671
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-19671
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-19671/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: pom.xml:234
Falling back to three-way merge...
Applied patch to 'pom.xml' cleanly.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java:16
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/convert/ETypeConverter.java' 
cleanly.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/ParquetDataColumnReaderFactory.java:21
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/io/parquet/vector/ParquetDataColumnReaderFactory.java'
 cleanly.
error: patch failed: ql/src/test/results/clientpositive/parquet_analyze.q.out:94
Falling back to three-way merge...
Applied patch to 'ql/src/test/results/clientpositive/parquet_analyze.q.out' 
with conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/parquet_vectorization_0.q.out:1697
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/parquet_vectorization_0.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/spark/parquet_vectorization_0.q.out:1713
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/spark/parquet_vectorization_0.q.out' with 
conflicts.
Going to apply patch with: git apply -p0
/data/hiveptest/working/scratch/build.patch:1671: trailing whitespace.
totalSize   6927
/data/hiveptest/working/scratch/build.patch:1681: trailing whitespace.
rawDataSize 5774
/data/hiveptest/working/scratch/build.patch:1682: trailing whitespace.
totalSize   6927
error: patch failed: pom.xml:234
Falling back to three-way merge...
Applied 

[jira] [Assigned] (HIVE-19358) CBO decorrelation logic should generate Hive operators

2019-11-29 Thread AK97 (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AK97 reassigned HIVE-19358:
---

Assignee: AK97  (was: Jesus Camacho Rodriguez)

> CBO decorrelation logic should generate Hive operators
> --
>
> Key: HIVE-19358
> URL: https://issues.apache.org/jira/browse/HIVE-19358
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: AK97
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19358.01.patch, HIVE-19358.02.patch, 
> HIVE-19358.03.patch, HIVE-19358.04.patch, HIVE-19358.05.patch, 
> HIVE-19358.patch, fix.patch
>
>
> Decorrelation logic may generate logical instances of the operators in the 
> plan (e.g., LogicalFilter instead of HiveFilter). This leads to errors while 
> costing the tree in the Volcano planner (used in MV rewriting), since logical 
> operators do not have a cost associated to them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-19358) CBO decorrelation logic should generate Hive operators

2019-11-29 Thread AK97 (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

AK97 reassigned HIVE-19358:
---

Assignee: (was: AK97)

> CBO decorrelation logic should generate Hive operators
> --
>
> Key: HIVE-19358
> URL: https://issues.apache.org/jira/browse/HIVE-19358
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19358.01.patch, HIVE-19358.02.patch, 
> HIVE-19358.03.patch, HIVE-19358.04.patch, HIVE-19358.05.patch, 
> HIVE-19358.patch, fix.patch
>
>
> Decorrelation logic may generate logical instances of the operators in the 
> plan (e.g., LogicalFilter instead of HiveFilter). This leads to errors while 
> costing the tree in the Volcano planner (used in MV rewriting), since logical 
> operators do not have a cost associated to them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22544:
--
Attachment: HIVE-22544.4.patch

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, HIVE-22544.4.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22544:
--
Status: Open  (was: Patch Available)

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, HIVE-22544.4.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22544:
--
Status: Patch Available  (was: Open)

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, HIVE-22544.4.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22327) Repl: Ignore read-only transactions in notification log

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984896#comment-16984896
 ] 

Hive QA commented on HIVE-22327:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
18s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
20s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 4 new + 563 unchanged - 3 fixed = 567 total (was 566) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 33 unchanged - 0 fixed 
= 34 total (was 33) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch server-extensions passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} itests/hcatalog-unit: The patch generated 0 new + 17 
unchanged - 1 fixed = 17 total (was 18) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} standalone-metastore/metastore-server generated 0 
new + 178 unchanged - 1 fixed = 178 total (was 179) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
29s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} server-extensions in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-

[jira] [Updated] (HIVE-21050) Upgrade Parquet to 1.11.0 and use LogicalTypes

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21050:
-
Status: Open  (was: Patch Available)

> Upgrade Parquet to 1.11.0 and use LogicalTypes
> --
>
> Key: HIVE-21050
> URL: https://issues.apache.org/jira/browse/HIVE-21050
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: Parquet, parquet
> Attachments: HIVE-21050.1.patch, HIVE-21050.1.patch, 
> HIVE-21050.1.patch, HIVE-21050.2.patch, HIVE-21050.3.patch, 
> HIVE-21050.4.patch, HIVE-21050.4.patch, HIVE-21050.4.patch, 
> HIVE-21050.5.patch, HIVE-21050.5.patch, HIVE-21050.5.patch, 
> HIVE-21050.6.patch, HIVE-21050.6.patch, HIVE-21050.6.patch.txt
>
>
> [WIP until Parquet community releases version 1.11.0]
> The new Parquet version (1.11.0) uses 
> [LogicalTypes|https://github.com/apache/parquet-format/blob/master/LogicalTypes.md]
>  instead of OriginalTypes.
>  These are backwards-compatible with OriginalTypes.
> Thanks to [~kuczoram] for her work on this patch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21050) Upgrade Parquet to 1.11.0 and use LogicalTypes

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21050:
-
Attachment: HIVE-21050.7.patch
Status: Patch Available  (was: Open)

> Upgrade Parquet to 1.11.0 and use LogicalTypes
> --
>
> Key: HIVE-21050
> URL: https://issues.apache.org/jira/browse/HIVE-21050
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: Parquet, parquet
> Attachments: HIVE-21050.1.patch, HIVE-21050.1.patch, 
> HIVE-21050.1.patch, HIVE-21050.2.patch, HIVE-21050.3.patch, 
> HIVE-21050.4.patch, HIVE-21050.4.patch, HIVE-21050.4.patch, 
> HIVE-21050.5.patch, HIVE-21050.5.patch, HIVE-21050.5.patch, 
> HIVE-21050.6.patch, HIVE-21050.6.patch, HIVE-21050.6.patch.txt, 
> HIVE-21050.7.patch
>
>
> [WIP until Parquet community releases version 1.11.0]
> The new Parquet version (1.11.0) uses 
> [LogicalTypes|https://github.com/apache/parquet-format/blob/master/LogicalTypes.md]
>  instead of OriginalTypes.
>  These are backwards-compatible with OriginalTypes.
> Thanks to [~kuczoram] for her work on this patch.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22327) Repl: Ignore read-only transactions in notification log

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984915#comment-16984915
 ] 

Hive QA commented on HIVE-22327:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987133/HIVE-22327.14.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17756 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testOpenTxnEvent
 (batchId=271)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTables.testOpenTxnEvent
 (batchId=273)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19672/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19672/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19672/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987133 - PreCommit-HIVE-Build

> Repl: Ignore read-only transactions in notification log
> ---
>
> Key: HIVE-22327
> URL: https://issues.apache.org/jira/browse/HIVE-22327
> Project: Hive
>  Issue Type: Improvement
>  Components: repl
>Reporter: Gopal Vijayaraghavan
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22327.1.patch, HIVE-22327.10.patch, 
> HIVE-22327.11.patch, HIVE-22327.12.patch, HIVE-22327.13.patch, 
> HIVE-22327.14.patch, HIVE-22327.2.patch, HIVE-22327.3.patch, 
> HIVE-22327.4.patch, HIVE-22327.5.patch, HIVE-22327.6.patch, 
> HIVE-22327.7.patch, HIVE-22327.8.patch, HIVE-22327.9.patch
>
>
> Read txns need not be replicated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22563) Required field 'client_protocol' is unset (hive server backward compatibility)

2019-11-29 Thread Nitin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin reassigned HIVE-22563:


Assignee: (was: Nitin)

> Required field 'client_protocol' is unset (hive server backward compatibility)
> --
>
> Key: HIVE-22563
> URL: https://issues.apache.org/jira/browse/HIVE-22563
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.3.6
>Reporter: ZhangZhiCheng
>Priority: Blocker
> Attachments: image-2019-11-29-10-20-46-982.png
>
>
> I tried to connect to Hive server 1.2.1 using the Hive JDBC client 
> (hive-jdbc-2.3.6.jar) and got the error "Required field 
> 'client_protocol' is unset".  Does that mean Hive server 1.2.1 has no backward 
> compatibility with newer Hive client versions?
> !image-2019-11-29-10-20-46-982.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22563) Required field 'client_protocol' is unset (hive server backward compatibility)

2019-11-29 Thread Nitin (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nitin reassigned HIVE-22563:


Assignee: Nitin

> Required field 'client_protocol' is unset (hive server backward compatibility)
> --
>
> Key: HIVE-22563
> URL: https://issues.apache.org/jira/browse/HIVE-22563
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.3.6
>Reporter: ZhangZhiCheng
>Assignee: Nitin
>Priority: Blocker
> Attachments: image-2019-11-29-10-20-46-982.png
>
>
> I tried to connect to Hive server 1.2.1 using the Hive JDBC client 
> (hive-jdbc-2.3.6.jar) and got the error "Required field 
> 'client_protocol' is unset".  Does that mean Hive server 1.2.1 has no backward 
> compatibility with newer Hive client versions?
> !image-2019-11-29-10-20-46-982.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21954) QTest: support for running qtests on various metastore DBs

2019-11-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-21954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-21954:

Attachment: HIVE-21954.10.patch

> QTest: support for running qtests on various metastore DBs
> --
>
> Key: HIVE-21954
> URL: https://issues.apache.org/jira/browse/HIVE-21954
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore, Testing Infrastructure
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21954.01.patch, HIVE-21954.02.patch, 
> HIVE-21954.03.patch, HIVE-21954.03.patch, HIVE-21954.03.patch, 
> HIVE-21954.04.patch, HIVE-21954.05.patch, HIVE-21954.07.patch, 
> HIVE-21954.07.patch, HIVE-21954.08.patch, HIVE-21954.09.patch, 
> HIVE-21954.10.patch, HIVE-21954.10.patch
>
>
> In HIVE-21940, a postgres metastore related issue has been fixed, and a local 
> reproduction has been provided.
> {code}
> export QTEST_LEAVE_FILES=true
> docker kill metastore-test-postgres-install
> docker rm metastore-test-postgres-install
> cd standalone-metastore
> mvn verify -DskipITests=false -Dit.test=ITestPostgres#install -Dtest=nosuch 
> -Dmetastore.itest.no.stop.container=true
> cd ..
> mvn test -Dtest.output.overwrite=true -Pitests,hadoop-2 -pl itests/qtest 
> -Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
> -Dhive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
> {code}
> The problem with this solution is that data/conf/hive-site.xml has to be 
> edited manually. My proposal is to introduce a property 
> (-Dmetastore.db=postgres), which can take care of the parameters on the fly. 
> Two supported approaches could be:
> 1. simple parameters: -Dmetastore.db=postgres
> In this case, tests depend on settings from ITestPostgres class (password, 
> db, etc.)
> 2. verbose but flexible parameters: [see hive-site.xml HIVE-21940's repro 
> patch|https://issues.apache.org/jira/secure/attachment/12973534/HIVE-21940.repro.patch]
>  
> In the first implementation, I would not start the metastore DB automatically 
> (which is done by 'mvn verify ...'), but this is still under planning. 
> In the long term, we should consider running this kind of test in the precommit 
> phase, so maybe -Dmetastore.db=postgres could start the metastore DB 
> automatically. We should also consider running some qtests on various 
> metastores. I would not pick them randomly, but choose some "metastore-heavy" 
> ones instead.
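> As a hedged illustration of option 1 (the config keys are standard metastore JDO properties, but the values and the mapping itself are assumptions, not the actual patch), -Dmetastore.db=postgres could simply translate into the connection settings that are currently edited by hand in data/conf/hive-site.xml:
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> public final class MetastoreDbFlagSketch {
>
>   // Map the single flag to the JDO connection properties; anything else
>   // falls through to the default (Derby) configuration.
>   static Map<String, String> configFor(String metastoreDb) {
>     Map<String, String> conf = new HashMap<>();
>     if ("postgres".equals(metastoreDb)) {
>       conf.put("javax.jdo.option.ConnectionDriverName", "org.postgresql.Driver");
>       conf.put("javax.jdo.option.ConnectionURL",
>           "jdbc:postgresql://localhost:5432/metastore_db");
>       conf.put("javax.jdo.option.ConnectionUserName", "hive");
>       conf.put("javax.jdo.option.ConnectionPassword", "hive");
>     }
>     return conf;
>   }
> }
> {code}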



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22551) BytesColumnVector initBuffer should clean vector and length consistently

2019-11-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-22551:

Attachment: HIVE-22551.01.patch

> BytesColumnVector initBuffer should clean vector and length consistently 
> -
>
> Key: HIVE-22551
> URL: https://issues.apache.org/jira/browse/HIVE-22551
> Project: Hive
>  Issue Type: Bug
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Attachments: HIVE-22551.01.patch, HIVE-22551.01.patch, 
> HIVE-22551.01.patch, HIVE-22551.01.patch
>
>
> VectorExtractRow relies on the fact that vector[i] and length[i] are 
> consistent within the BytesColumnVector, otherwise it throws exception:
> https://github.com/apache/hive/blob/edc53cc0d95e983c371a224943dd866210f0c65c/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExtractRow.java#L275
> There is a scenario in which only vector[i] has been cleaned while reusing the 
> column vector, and then this kind of exception can be thrown. The reproduction 
> was made with 
> [LlapDump|https://github.com/apache/hive/blob/master/llap-ext-client/src/java/org/apache/hadoop/hive/llap/LlapDump.java]
>  with String columns (longer than 16 chars):
> {code}
> 19/10/17 15:55:49 ERROR llap.LlapArrowRowRecordReader: Failed to fetch Arrow 
> batch
> java.lang.RuntimeException: STRING entry: batchIndex 45
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.BytesReadError(VectorExtractRow.java:488)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:193)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:483)
> at 
> org.apache.hadoop.hive.ql.io.arrow.Deserializer.deserialize(Deserializer.java:125)
> at 
> org.apache.hadoop.hive.ql.io.arrow.ArrowColumnarBatchSerDe.deserialize(ArrowColumnarBatchSerDe.java:284)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:75)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41)
> at datareader.LlapDump.main(LlapDump.java:124)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984943#comment-16984943
 ] 

Hive QA commented on HIVE-21266:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
21s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 5 new + 592 unchanged - 2 
fixed = 597 total (was 594) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19673/dev-support/hive-personality.sh
 |
| git revision | master / b9bdbed |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19673/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19673/yetus/whitespace-eol.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19673/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch, 
> HIVE-21266.02.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This is problematic.
> Suppose you have 1 delta file from streaming ingest: {{delta_11_20}} where 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (which 
> drops anything that belongs to the aborted txn) and will transition the 
> compaction to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now aborted data will come back as 
> committed.
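
As a rough illustration of the fix direction (the method and parameter names below are hypothetical, and this is not the attached patch), the skip guard could be made aware of aborted writes so the Cleaner only runs when nothing would be lost:

{code:java}
// Illustrative sketch only, not the attached patch; the aborted-directory
// parameter is an assumption about what the directory scan can provide.
import java.util.List;
import org.apache.hadoop.fs.Path;

final class CompactionSkipCheck {
  /**
   * Skip compaction only when there is genuinely nothing to rewrite AND no
   * aborted writes (e.g. txnid:13 inside delta_11_20) that the rewrite would
   * have removed; otherwise the Cleaner would drop the aborted-txn metadata
   * while the aborted rows stay on disk and come back as committed.
   */
  static boolean canSkipCompaction(int deltaCount, boolean hasBase, int origCount,
                                   List<Path> abortedDirectories) {
    boolean nothingToCompact = (deltaCount + (hasBase ? 1 : 0)) + origCount <= 1;
    boolean hasAbortedWrites = abortedDirectories != null && !abortedDirectories.isEmpty();
    return nothingToCompact && !hasAbortedWrites;
  }
}
{code}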

[jira] [Commented] (HIVE-22547) Review txn compactor Package

2019-11-29 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984958#comment-16984958
 ] 

Peter Vary commented on HIVE-22547:
---

CC: [~lpinter]

> Review txn compactor Package
> 
>
> Key: HIVE-22547
> URL: https://issues.apache.org/jira/browse/HIVE-22547
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-22547.1.patch
>
>
> * Remove log-and-throw anti-pattern
> * Use parameterized logging
> * Add a CompactionException class to improve debuggability
> * Introduce the Java Optional utility
> * Other cleanup
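
For illustration, a small hedged sketch of what the first two items typically look like in practice (generic example code, not taken from HIVE-22547.1.patch; the CompactionException constructor is assumed):

{code:java}
// Generic example, not code from the patch: parameterized logging and a single
// throw of a dedicated exception instead of log-and-throw.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class CompactorCleanupExample {
  private static final Logger LOG = LoggerFactory.getLogger(CompactorCleanupExample.class);

  // Stand-in for the CompactionException the ticket introduces (constructor assumed).
  static class CompactionException extends RuntimeException {
    CompactionException(String message, Throwable cause) { super(message, cause); }
  }

  void logAndThrowAntiPattern(String table, Exception e) {
    // Anti-pattern: the failure gets logged here and again wherever it is caught.
    LOG.error("Compaction failed for table " + table + ": " + e.getMessage());
    throw new RuntimeException(e);
  }

  void throwOnce(String table, Exception e) {
    // Throw once with context; logging is left to the layer that handles it.
    throw new CompactionException("Compaction failed for table " + table, e);
  }

  void logProgress(String table, int deltaCount) {
    // Parameterized logging: no string concatenation when DEBUG is disabled.
    LOG.debug("Compacting {} with {} deltas", table, deltaCount);
  }
}
{code}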



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22521) Both Driver and SessionState has a userName

2019-11-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-22521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984962#comment-16984962
 ] 

László Bodor commented on HIVE-22521:
-

+1

> Both Driver and SessionState has a userName
> ---
>
> Key: HIVE-22521
> URL: https://issues.apache.org/jira/browse/HIVE-22521
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22521.01.patch, HIVE-22521.01.patch, 
> HIVE-22521.01.patch, HIVE-22521.01.patch, HIVE-22521.01.patch, 
> HIVE-22521.01.patch
>
>
> This caused some confusing behaviour for me, especially when the 2 values 
> were different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22554) ACID: Wait timeout for blocking compaction should be configurable

2019-11-29 Thread Laszlo Pinter (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-22554:
-
Attachment: HIVE-22554.02.patch

> ACID: Wait timeout for blocking compaction should be configurable
> -
>
> Key: HIVE-22554
> URL: https://issues.apache.org/jira/browse/HIVE-22554
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Minor
> Attachments: HIVE-22554.01.patch, HIVE-22554.02.patch
>
>
> The wait timeout for blocking compaction is hardcoded to 5 minutes. 
> {code:java}
> public class AlterTableCompactOperation extends 
> DDLOperation {
>   private static final int FIVE_MINUTES_IN_MILLIES = 5*60*1000;
> ...
> }{code}
> This should be configurable via a Hive Configuration parameter. 
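
A minimal sketch of the proposed direction, assuming a hypothetical property name ("hive.compactor.wait.timeout" and the default below are placeholders, not the final configuration key):

{code:java}
// Sketch only: read the blocking-compaction wait timeout from configuration
// instead of the hardcoded FIVE_MINUTES_IN_MILLIES constant.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

final class CompactionWaitTimeout {
  static long waitTimeoutMs(Configuration conf) {
    // Falls back to the current behaviour (5 minutes) when the key is unset.
    return conf.getTimeDuration("hive.compactor.wait.timeout",
        5 * 60 * 1000L, TimeUnit.MILLISECONDS);
  }
}
{code}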



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22327) Repl: Ignore read-only transactions in notification log

2019-11-29 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22327:
--
Attachment: HIVE-22327.15.patch

> Repl: Ignore read-only transactions in notification log
> ---
>
> Key: HIVE-22327
> URL: https://issues.apache.org/jira/browse/HIVE-22327
> Project: Hive
>  Issue Type: Improvement
>  Components: repl
>Reporter: Gopal Vijayaraghavan
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22327.1.patch, HIVE-22327.10.patch, 
> HIVE-22327.11.patch, HIVE-22327.12.patch, HIVE-22327.13.patch, 
> HIVE-22327.14.patch, HIVE-22327.15.patch, HIVE-22327.2.patch, 
> HIVE-22327.3.patch, HIVE-22327.4.patch, HIVE-22327.5.patch, 
> HIVE-22327.6.patch, HIVE-22327.7.patch, HIVE-22327.8.patch, HIVE-22327.9.patch
>
>
> Read txns need not be replicated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22562) Harmonize SessionState.getUserName

2019-11-29 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984977#comment-16984977
 ] 

Peter Vary commented on HIVE-22562:
---

[~kgyrtkirk]: When I was playing around with current_user and created the 
logged_in_user UDF, I found that in some configurations only current_user 
provided meaningful information, and in other configurations only 
logged_in_user did. I am not sure, but I think it had something to do with the 
impersonation-related settings. So please consider this when moving forward. 
(Sorry that I am not able to remember the exact details :()
 Thanks,
 Peter

> Harmonize SessionState.getUserName
> --
>
> Key: HIVE-22562
> URL: https://issues.apache.org/jira/browse/HIVE-22562
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> we might have 2 different user names at the same time:
> * 
> [getUserName()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1912]
> ** a method which relies on the userName field of the SessionState
> * 
> [getUserFromAuthenticator()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1291]
> ** a method which uses the authenticator to do the heavy lifting
> * there are all kinds of interesting call sites, like:
> ** some that [prefer the authenticator over 
> getUserName()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L254]
> ** some that [use getUserName() regardless of the authenticator, but carry a 
> fixme|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1669]
> ** and some that just use the authenticator, with or without 
> notes/etc.
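
Purely as a sketch of one possible harmonization (not the eventual patch), a single accessor could prefer the authenticator and fall back to the SessionState field, so call sites stop making ad hoc choices:

{code:java}
// One possible shape of the harmonization, for illustration only.
import org.apache.hadoop.hive.ql.session.SessionState;

final class UserNameResolver {
  static String resolveUserName() {
    String fromAuthenticator = SessionState.getUserFromAuthenticator();
    if (fromAuthenticator != null) {
      return fromAuthenticator;
    }
    SessionState ss = SessionState.get();
    return ss == null ? null : ss.getUserName();
  }
}
{code}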



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-22562) Harmonize SessionState.getUserName

2019-11-29 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984977#comment-16984977
 ] 

Peter Vary edited comment on HIVE-22562 at 11/29/19 12:39 PM:
--

[~kgyrtkirk]: When I was playing around with current_user and created the 
logged_in_user UDF (HIVE-14100), I found that in some configurations only 
current_user provided meaningful information, and in other configurations only 
logged_in_user did. I am not sure, but I think it had something to do with the 
impersonation-related settings. So please consider this when moving forward. 
(Sorry that I am not able to remember the exact details :()
 Thanks,
 Peter


was (Author: pvary):
[~kgyrtkirk]: When I was playing around with current_user and created the 
logged_in_user UDF, I found that in some configurations only current_user 
provided meaningful information, and in other configurations only 
logged_in_user did. I am not sure, but I think it had something to do with the 
impersonation-related settings. So please consider this when moving forward. 
(Sorry that I am not able to remember the exact details :()
 Thanks,
 Peter

> Harmonize SessionState.getUserName
> --
>
> Key: HIVE-22562
> URL: https://issues.apache.org/jira/browse/HIVE-22562
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> we might have 2 different user names at the same time:
> * 
> [getUserName()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1912]
> ** a method which relies on the userName field of the SessionState
> * 
> [getUserFromAuthenticator()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1291]
> ** a method which uses the authenticator to do the heavy lifting
> * there are all kinds of interesting call sites, like:
> ** some that [prefer the authenticator over 
> getUserName()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L254]
> ** some that [use getUserName() regardless of the authenticator, but carry a 
> fixme|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1669]
> ** and some that just use the authenticator, with or without 
> notes/etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22536) Improve return path enabling/disabling

2019-11-29 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22536:
--
Attachment: HIVE-22536.03.patch

> Improve return path enabling/disabling
> --
>
> Key: HIVE-22536
> URL: https://issues.apache.org/jira/browse/HIVE-22536
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22536.01.patch, HIVE-22536.02.patch, 
> HIVE-22536.03.patch
>
>
> Instead of having a boolean for hive.cbo.returnpath.hiveop, it should be 
> on/off/supported. In the case of "supported", return path should be used only 
> for the subset of commands that are already verified to work with it. This 
> is a temporary solution while return path is being developed, 
> before making it the only way to handle commands.
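
A hypothetical sketch of the on/off/supported idea (the enum, method and command names are made up for illustration; they are not from the patch):

{code:java}
// Hypothetical sketch: "supported" enables return path only for commands
// already verified to work with it.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

final class ReturnPathMode {
  enum Mode { ON, OFF, SUPPORTED }

  // Commands already verified to work with return path (placeholder values).
  private static final Set<String> VERIFIED_COMMANDS =
      new HashSet<>(Arrays.asList("SELECT", "INSERT"));

  static boolean useReturnPath(Mode mode, String commandType) {
    switch (mode) {
      case ON:        return true;
      case OFF:       return false;
      case SUPPORTED: return VERIFIED_COMMANDS.contains(commandType);
      default:        return false;
    }
  }
}
{code}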



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985003#comment-16985003
 ] 

Hive QA commented on HIVE-21266:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987145/HIVE-21266.02.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 17753 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver[url_hook] 
(batchId=296)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMetastoreTablesCleanup 
(batchId=355)
org.apache.hive.minikdc.TestSSLWithMiniKdc.testConnection (batchId=302)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19673/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19673/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19673/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987145 - PreCommit-HIVE-Build

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch, 
> HIVE-21266.02.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This is problematic.
> Suppose you have 1 delta file from streaming ingest: {{delta_11_20}} where 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (which 
> drops anything that belongs to the aborted txn) and will transition the 
> compaction to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now aborted data will come back as 
> committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22557) Break up DDLSemanticAnalyzer - extract Table constraints analyzers

2019-11-29 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22557:
--
Attachment: (was: HIVE-22557.01.patch)

> Break up DDLSemanticAnalyzer - extract Table constraints analyzers
> --
>
> Key: HIVE-22557
> URL: https://issues.apache.org/jira/browse/HIVE-22557
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22557.01.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into more manageable classes 
> under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #10: extract the table constraint-related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22557) Break up DDLSemanticAnalyzer - extract Table constraints analyzers

2019-11-29 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22557:
--
Attachment: HIVE-22557.01.patch

> Break up DDLSemanticAnalyzer - extract Table constraints analyzers
> --
>
> Key: HIVE-22557
> URL: https://issues.apache.org/jira/browse/HIVE-22557
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22557.01.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into more manageable classes 
> under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #10: extract the table constraint-related analyzers from 
> DDLSemanticAnalyzer, and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985005#comment-16985005
 ] 

Hive QA commented on HIVE-22544:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987147/HIVE-22544.4.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19674/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19674/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19674/

Messages:
{noformat}
 This message was trimmed, see log for full details 
error: test/results/clientpositive/perf/tez/constraints/query25.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query26.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query27.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query28.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query29.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query3.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query30.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query31.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query32.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query33.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query34.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query35.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query36.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query37.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query38.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query39.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query4.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query40.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query42.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query43.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query44.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query45.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query46.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query47.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query48.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query49.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query5.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query50.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query51.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query52.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query53.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query54.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query55.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query56.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query57.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query58.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query59.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query6.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query60.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query61.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query63.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query64.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query65.q.out: does not 
exist in index
error: test/results/clientpositive/perf/tez/constraints/query66.q.out: does not 
exist in index
error: test/results/cli

[jira] [Updated] (HIVE-20150) TopNKey pushdown

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-20150:
--
Status: Open  (was: Patch Available)

> TopNKey pushdown
> 
>
> Key: HIVE-20150
> URL: https://issues.apache.org/jira/browse/HIVE-20150
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Affects Versions: 4.0.0
>Reporter: Teddy Choi
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20150.1.patch, HIVE-20150.10.patch, 
> HIVE-20150.11.patch, HIVE-20150.11.patch, HIVE-20150.14.patch, 
> HIVE-20150.15.patch, HIVE-20150.16.patch, HIVE-20150.17.patch, 
> HIVE-20150.17.patch, HIVE-20150.2.patch, HIVE-20150.4.patch, 
> HIVE-20150.5.patch, HIVE-20150.6.patch, HIVE-20150.7.patch, 
> HIVE-20150.8.patch, HIVE-20150.9.patch
>
>
> The TopNKey operator is implemented in HIVE-17896, but it needs more work on 
> the pushdown implementation. This issue covers the TopNKey pushdown 
> implementation with proper tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (HIVE-22280) Q tests for partitioned temporary tables

2019-11-29 Thread Laszlo Pinter (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter reopened HIVE-22280:
--

> Q tests for partitioned temporary tables
> 
>
> Key: HIVE-22280
> URL: https://issues.apache.org/jira/browse/HIVE-22280
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22280.01.patch, HIVE-22280.02.patch, 
> HIVE-22280.03.patch, HIVE-22280.04.patch, HIVE-22280.05.patch, 
> HIVE-22280.06.patch, HIVE-22280.07.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-20150) TopNKey pushdown

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-20150:
--
Attachment: HIVE-20150.18.patch

> TopNKey pushdown
> 
>
> Key: HIVE-20150
> URL: https://issues.apache.org/jira/browse/HIVE-20150
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Affects Versions: 4.0.0
>Reporter: Teddy Choi
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20150.1.patch, HIVE-20150.10.patch, 
> HIVE-20150.11.patch, HIVE-20150.11.patch, HIVE-20150.14.patch, 
> HIVE-20150.15.patch, HIVE-20150.16.patch, HIVE-20150.17.patch, 
> HIVE-20150.17.patch, HIVE-20150.18.patch, HIVE-20150.2.patch, 
> HIVE-20150.4.patch, HIVE-20150.5.patch, HIVE-20150.6.patch, 
> HIVE-20150.7.patch, HIVE-20150.8.patch, HIVE-20150.9.patch
>
>
> The TopNKey operator is implemented in HIVE-17896, but it needs more work on 
> the pushdown implementation. This issue covers the TopNKey pushdown 
> implementation with proper tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-20150) TopNKey pushdown

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-20150:
--
Status: Patch Available  (was: Open)

> TopNKey pushdown
> 
>
> Key: HIVE-20150
> URL: https://issues.apache.org/jira/browse/HIVE-20150
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Affects Versions: 4.0.0
>Reporter: Teddy Choi
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20150.1.patch, HIVE-20150.10.patch, 
> HIVE-20150.11.patch, HIVE-20150.11.patch, HIVE-20150.14.patch, 
> HIVE-20150.15.patch, HIVE-20150.16.patch, HIVE-20150.17.patch, 
> HIVE-20150.17.patch, HIVE-20150.18.patch, HIVE-20150.2.patch, 
> HIVE-20150.4.patch, HIVE-20150.5.patch, HIVE-20150.6.patch, 
> HIVE-20150.7.patch, HIVE-20150.8.patch, HIVE-20150.9.patch
>
>
> The TopNKey operator is implemented in HIVE-17896, but it needs more work on 
> the pushdown implementation. This issue covers the TopNKey pushdown 
> implementation with proper tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22280) Q tests for partitioned temporary tables

2019-11-29 Thread Laszlo Pinter (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-22280:
-
Attachment: HIVE-22280.08.patch

> Q tests for partitioned temporary tables
> 
>
> Key: HIVE-22280
> URL: https://issues.apache.org/jira/browse/HIVE-22280
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22280.01.patch, HIVE-22280.02.patch, 
> HIVE-22280.03.patch, HIVE-22280.04.patch, HIVE-22280.05.patch, 
> HIVE-22280.06.patch, HIVE-22280.07.patch, HIVE-22280.08.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22544:
--
Attachment: HIVE-22544.5.patch

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, 
> HIVE-22544.4.patch, HIVE-22544.5.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22544:
--
Status: Open  (was: Patch Available)

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, 
> HIVE-22544.4.patch, HIVE-22544.5.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22544:
--
Status: Patch Available  (was: Open)

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, 
> HIVE-22544.4.patch, HIVE-22544.5.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22552) Some q tests uses the same name for test tables

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22552:
--
Attachment: HIVE-22552.2.patch

> Some q tests uses the same name for test tables
> ---
>
> Key: HIVE-22552
> URL: https://issues.apache.org/jira/browse/HIVE-22552
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22552.1.patch, HIVE-22552.2.patch, 
> HIVE-22552.2.patch
>
>
> Some q tests use the name "t_test" when creating a test table. This can 
> cause conflicts when running the tests in parallel against the same metastore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22552) Some q tests uses the same name for test tables

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22552:
--
Status: Open  (was: Patch Available)

> Some q tests uses the same name for test tables
> ---
>
> Key: HIVE-22552
> URL: https://issues.apache.org/jira/browse/HIVE-22552
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22552.1.patch, HIVE-22552.2.patch, 
> HIVE-22552.2.patch
>
>
> Some q tests use the name "t_test" when creating a test table. This can 
> cause conflicts when running the tests in parallel against the same metastore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22552) Some q tests uses the same name for test tables

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22552:
--
Status: Patch Available  (was: Open)

> Some q tests uses the same name for test tables
> ---
>
> Key: HIVE-22552
> URL: https://issues.apache.org/jira/browse/HIVE-22552
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22552.1.patch, HIVE-22552.2.patch, 
> HIVE-22552.2.patch
>
>
> Some q tests use the name "t_test" when creating a test table. This can 
> cause conflicts when running the tests in parallel against the same metastore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22490) Adding jars with special characters in their path throws error

2019-11-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Szita updated HIVE-22490:
--
Description: 
HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
jars or other dependencies in a Hive session, but neglects to handle the URIs 
as actual URIs, simply calling toString() on them.

This resulted in a regression: a path such as /tmp/blabla-[special].jar was 
working before HIVE-9664 and now it's throwing a URISyntaxException error.

I think it's fair to make the users provide an encoded URL 
({{blabla-%5Bspecial%5D.jar}}), but then the issue with the current 
implementation will be the inability to find the file on FS, because Hive will 
look for it in {{blabla-%5Bspecial%5D.jar}} format, instead of 
blabla-[special].jar.

  was:
HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
jars or other dependencies in a Hive session, but forgot to add URL encoding.

This resulted a regression as path such as /tmp/blabla-[special].jar was 
working before HIVE-9664 and now it's throwing an error.


> Adding jars with special characters in their path throws error
> --
>
> Key: HIVE-22490
> URL: https://issues.apache.org/jira/browse/HIVE-22490
> Project: Hive
>  Issue Type: Bug
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22490.0.patch
>
>
> HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
> jars or other dependencies in a Hive session, but neglects to handle the URIs 
> as actual URIs, simply calling toString() on them.
> This resulted in a regression: a path such as /tmp/blabla-[special].jar was 
> working before HIVE-9664 and now it's throwing a URISyntaxException error.
> I think it's fair to make the users provide an encoded URL 
> ({{blabla-%5Bspecial%5D.jar}}), but then the issue with the current 
> implementation will be the inability to find the file on FS, because Hive 
> will look for it in {{blabla-%5Bspecial%5D.jar}} format, instead of 
> blabla-[special].jar.
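
A small runnable illustration of the decode step the description argues for (example code, not the attached patch): java.net.URI.getPath() already returns the decoded path, which is what the filesystem lookup needs even when the user supplied the encoded form.

{code:java}
// Runnable illustration only: URI.getPath() returns the decoded path.
import java.net.URI;
import java.net.URISyntaxException;

public class AddJarPathDemo {
  public static void main(String[] args) throws URISyntaxException {
    URI uri = new URI("file:///tmp/blabla-%5Bspecial%5D.jar");
    System.out.println(uri.getPath());   // prints /tmp/blabla-[special].jar
  }
}
{code}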



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22490) Adding jars with special characters in their path throws error

2019-11-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ádám Szita updated HIVE-22490:
--
Attachment: HIVE-22490.1.patch

> Adding jars with special characters in their path throws error
> --
>
> Key: HIVE-22490
> URL: https://issues.apache.org/jira/browse/HIVE-22490
> Project: Hive
>  Issue Type: Bug
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22490.0.patch, HIVE-22490.1.patch
>
>
> HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
> jars or other dependencies in a Hive session, but neglects to handle the URIs 
> as actual URIs, simply calling toString() on them.
> This resulted in a regression: a path such as /tmp/blabla-[special].jar was 
> working before HIVE-9664 and now it's throwing a URISyntaxException error.
> I think it's fair to make the users provide an encoded URL 
> ({{blabla-%5Bspecial%5D.jar}}), but then the issue with the current 
> implementation will be the inability to find the file on FS, because Hive 
> will look for it in {{blabla-%5Bspecial%5D.jar}} format, instead of 
> blabla-[special].jar.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22062) WriteId is not updated for a partitioned ACID table when schema changes

2019-11-29 Thread Gabor Kaszab (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Kaszab updated HIVE-22062:

Description: 
Changing the schema (e.g. adding a new column) of a non-partitioned ACID table 
results in the table-level writeId being incremented. This is as expected.

However, if you do the same on a partitioned ACID table then neither the 
table-level nor the partition-level writeIds are updated. I would expect in 
this case to increment the table-level writeId to reflect that the table has 
been changed.
Note that get_valid_write_ids() shows that the high watermark is incremented 
even though the writeId isn't.

Update: I'd extend the scope of this Jira a bit further. There are a number of 
use cases in Hive that don't result in a writeId change on ACID tables, and as 
a result there is no way for other systems (like Impala) to judge whether a 
refresh should be run on a table or not. The only option is to refresh all the 
data for a table every time, which is expensive. For example, in addition to 
the above use case, compaction is not noticeable from outside Hive.

  was:
Changing the schema (e.g. adding a new column) of a non-partitioned ACID table 
results in the table-level writeId being incremented. This is as expected.

However, if you do the same on a partitioned ACID table then neither the 
table-level nor the partition-level writeIds are updated. I would expect in 
this case to increment the table-level writeId to reflect that the table has 
been changed.
Note, that get_valid_write_ids() shows that the high watermark is incremented 
even though the writeId isn't.


> WriteId is not updated for a partitioned ACID table when schema changes
> ---
>
> Key: HIVE-22062
> URL: https://issues.apache.org/jira/browse/HIVE-22062
> Project: Hive
>  Issue Type: Bug
>Reporter: Gabor Kaszab
>Assignee: Laszlo Kovari
>Priority: Major
>  Labels: ACID
>
> Changing the schema (e.g. adding a new column) of a non-partitioned ACID 
> table results in the table-level writeId being incremented. This is as 
> expected.
> However, if you do the same on a partitioned ACID table then neither the 
> table-level nor the partition-level writeIds are updated. I would expect in 
> this case to increment the table-level writeId to reflect that the table has 
> been changed.
> Note that get_valid_write_ids() shows that the high watermark is incremented 
> even though the writeId isn't.
> Update: I'd extend the scope of this Jira a bit further. There are a number 
> of use cases in Hive that don't result in a writeId change on ACID tables, 
> and as a result there is no way for other systems (like Impala) to judge 
> whether a refresh should be run on a table or not. The only option is to 
> refresh all the data for a table every time, which is expensive. For example, 
> in addition to the above use case, compaction is not noticeable from outside 
> Hive.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21050) Upgrade Parquet to 1.11.0 and use LogicalTypes

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985043#comment-16985043
 ] 

Hive QA commented on HIVE-21050:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
29s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} ql: The patch generated 0 new + 145 unchanged - 4 
fixed = 145 total (was 149) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} root: The patch generated 0 new + 145 unchanged - 4 
fixed = 145 total (was 149) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19675/dev-support/hive-personality.sh
 |
| git revision | master / b9bdbed |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19675/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Upgrade Parquet to 1.11.0 and use LogicalTypes
> --
>
> Key: HIVE-21050
> URL: https://issues.apache.org/jira/browse/HIVE-21050
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: Parquet, parquet
> Attachments: HIVE-21050.1.patch, HIVE-21050.1.patch, 
> HIVE-21050.1.patch, HIVE-21050.2.patch, HIVE-21050.3.patch, 
> HIVE-21050.4.patch, HIVE-21050.4.patch, HIVE-21050.4.patch, 
> HIVE-21050.5.patch, HIVE-21050.5.patch, HIVE-21050.5.patch, 
> HIVE-21050.6.patch, HIVE-21050.6.patch, HIVE-21050.6.patch.txt, 
> HIVE-21050.7.patch
>
>
> [WIP until Parquet community releases version 1.11.0]
> The new Parquet version (1.11.0) uses 
> [LogicalTypes|https://github.com/apache/parquet-format/blob/master/LogicalTypes.md]
>  instead of OriginalTypes.
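
For context, a small illustrative sketch of the API shift the upgrade is about (example code, not from the patch): Parquet 1.11 annotates primitive types with LogicalTypeAnnotation instead of the deprecated OriginalType constants.

{code:java}
// Illustrative only: pre-1.11 OriginalType vs. 1.11 LogicalTypeAnnotation.
import org.apache.parquet.schema.LogicalTypeAnnotation;
import org.apache.parquet.schema.OriginalType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Type;
import org.apache.parquet.schema.Types;

public class ParquetLogicalTypeDemo {
  public static void main(String[] args) {
    // Pre-1.11 style: deprecated OriginalType annotation.
    Type oldStyle = Types.required(PrimitiveTypeName.BINARY)
        .as(OriginalType.UTF8)
        .named("name");

    // 1.11 style: LogicalTypeAnnotation.
    Type newStyle = Types.required(PrimitiveTypeName.BINARY)
        .as(LogicalTypeAnnotation.stringType())
        .named("name");

    System.out.println(oldStyle);
    System.out.println(newStyle);
  }
}
{code}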

[jira] [Commented] (HIVE-21050) Upgrade Parquet to 1.11.0 and use LogicalTypes

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985046#comment-16985046
 ] 

Hive QA commented on HIVE-21050:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987150/HIVE-21050.7.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 17801 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_stats] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_parquet_projection]
 (batchId=48)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_complex_types_vectorization]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_map_type_vectorization]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_struct_type_vectorization]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_types_vectorization]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_partitioned_date_time]
 (batchId=185)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_parquet]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_parquet_types]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning]
 (batchId=195)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_join] 
(batchId=123)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_0]
 (batchId=121)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_10]
 (batchId=124)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_11]
 (batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_12]
 (batchId=125)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_13]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_14]
 (batchId=131)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_15]
 (batchId=154)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_16]
 (batchId=152)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_17]
 (batchId=127)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_1]
 (batchId=118)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_2]
 (batchId=115)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_3]
 (batchId=150)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_4]
 (batchId=134)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_5]
 (batchId=146)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_6]
 (batchId=133)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_7]
 (batchId=153)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_8]
 (batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_9]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_decimal_date]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_div0]
 (batchId=150)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_limit]
 (batchId=125)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_offset_limit]
 (batchId=129)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_part_project]
 (batchId=130)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_pushdown]
 (batchId=129)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=135)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_parquet_projection]
 (batchId=134)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19675/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19675/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19675/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.a

[jira] [Updated] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21266:
-
Attachment: HIVE-21266.03.patch
Status: Patch Available  (was: Open)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch, 
> HIVE-21266.02.patch, HIVE-21266.03.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This is problematic.
> Suppose you have 1 delta file from streaming ingest: {{delta_11_20}} where 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (which 
> drops anything that belongs to the aborted txn) and will transition the 
> compaction to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now aborted data will come back as 
> committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-29 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21266:
-
Status: Open  (was: Patch Available)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21266.01.patch, HIVE-21266.02.patch, 
> HIVE-21266.02.patch, HIVE-21266.03.patch
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This is problematic.
> Suppose you have 1 delta file from streaming ingest: {{delta_11_20}} where 
> {{txnid:13}} was aborted.  The code above will not rewrite the delta (which 
> drops anything that belongs to the aborted txn) and will transition the 
> compaction to the "ready_for_cleaning" state, which drops the metadata about 
> the aborted txn in {{markCleaned()}}.  Now aborted data will come back as 
> committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22489) Reduce Sink operator should order nulls by parameter

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22489:
--
Status: Patch Available  (was: Open)

>  Reduce Sink operator should order nulls by parameter
> -
>
> Key: HIVE-22489
> URL: https://issues.apache.org/jira/browse/HIVE-22489
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-22489.1.patch, HIVE-22489.2.patch, 
> HIVE-22489.3.patch
>
>
> When the property hive.default.nulls.last is set to true and no null order is 
> explicitly specified in the ORDER BY clause of the query, the null ordering 
> should be NULLS LAST.
> But some of the Reduce Sink operators still order nulls first.
> {code}
> SET hive.default.nulls.last=true;
> EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key LIMIT 5;
> {code}
> {code}
> PREHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key
> PREHOOK: type: QUERY
> PREHOOK: Input: default@src
>  A masked pattern was here 
> POSTHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key
> POSTHOOK: type: QUERY
> POSTHOOK: Input: default@src
>  A masked pattern was here 
> OPTIMIZED SQL: SELECT `t0`.`key`, `t2`.`value`
> FROM (SELECT `key`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t0`
> INNER JOIN (SELECT `key`, `value`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t2` ON `t0`.`key` = `t2`.`key`
> ORDER BY `t0`.`key`
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-1
> Tez
>  A masked pattern was here 
>   Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
>  A masked pattern was here 
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: src1
>   filterExpr: key is not null (type: boolean)
>   Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   GatherStats: false
>   Filter Operator
> isSamplingPred: false
> predicate: key is not null (type: boolean)
> Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
> Select Operator
>   expressions: key (type: string)
>   outputColumnNames: _col0
>   Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   Reduce Output Operator
> key expressions: _col0 (type: string)
> null sort order: a
> sort order: +
> Map-reduce partition columns: _col0 (type: string)
> Statistics: Num rows: 500 Data size: 43500 Basic 
> stats: COMPLETE Column stats: COMPLETE
> tag: 0
> auto parallelism: true
> Execution mode: vectorized, llap
> LLAP IO: no inputs
> Path -> Alias:
>  A masked pattern was here 
> Path -> Partition:
>  A masked pattern was here 
> Partition
>   base file name: src
>   input format: org.apache.hadoop.mapred.TextInputFormat
>   output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>   properties:
> COLUMN_STATS_ACCURATE 
> {"BASIC_STATS":"true","COLUMN_STATS":{"key":"true","value":"true"}}
> bucket_count -1
> bucketing_version 2
> column.name.delimiter ,
> columns key,value
> columns.comments 'default','default'
> columns.types string:string
>  A masked pattern was here 
> name default.src
> numFiles 1
> numRows 500
> rawDataSize 5312
> serialization.ddl struct src { string key, string value}
> serialization.format 1
> serialization.lib 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> totalSize 5812
>  A masked pattern was here 
>   serde: org.apache.hadoop.hive.serde2.

[jira] [Updated] (HIVE-22489) Reduce Sink operator should order nulls by parameter

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22489:
--
Attachment: HIVE-22489.3.patch

>  Reduce Sink operator should order nulls by parameter
> -
>
> Key: HIVE-22489
> URL: https://issues.apache.org/jira/browse/HIVE-22489
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-22489.1.patch, HIVE-22489.2.patch, 
> HIVE-22489.3.patch
>
>
> When the property hive.default.nulls.last is set to true and no null order is 
> explicitly specified in the ORDER BY clause of the query, the null ordering 
> should be NULLS LAST.
> But some of the Reduce Sink operators still order nulls first.
> {code}
> SET hive.default.nulls.last=true;
> EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key LIMIT 5;
> {code}
> {code}
> PREHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key
> PREHOOK: type: QUERY
> PREHOOK: Input: default@src
>  A masked pattern was here 
> POSTHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key
> POSTHOOK: type: QUERY
> POSTHOOK: Input: default@src
>  A masked pattern was here 
> OPTIMIZED SQL: SELECT `t0`.`key`, `t2`.`value`
> FROM (SELECT `key`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t0`
> INNER JOIN (SELECT `key`, `value`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t2` ON `t0`.`key` = `t2`.`key`
> ORDER BY `t0`.`key`
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-1
> Tez
>  A masked pattern was here 
>   Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
>  A masked pattern was here 
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: src1
>   filterExpr: key is not null (type: boolean)
>   Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   GatherStats: false
>   Filter Operator
> isSamplingPred: false
> predicate: key is not null (type: boolean)
> Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
> Select Operator
>   expressions: key (type: string)
>   outputColumnNames: _col0
>   Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   Reduce Output Operator
> key expressions: _col0 (type: string)
> null sort order: a
> sort order: +
> Map-reduce partition columns: _col0 (type: string)
> Statistics: Num rows: 500 Data size: 43500 Basic 
> stats: COMPLETE Column stats: COMPLETE
> tag: 0
> auto parallelism: true
> Execution mode: vectorized, llap
> LLAP IO: no inputs
> Path -> Alias:
>  A masked pattern was here 
> Path -> Partition:
>  A masked pattern was here 
> Partition
>   base file name: src
>   input format: org.apache.hadoop.mapred.TextInputFormat
>   output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>   properties:
> COLUMN_STATS_ACCURATE 
> {"BASIC_STATS":"true","COLUMN_STATS":{"key":"true","value":"true"}}
> bucket_count -1
> bucketing_version 2
> column.name.delimiter ,
> columns key,value
> columns.comments 'default','default'
> columns.types string:string
>  A masked pattern was here 
> name default.src
> numFiles 1
> numRows 500
> rawDataSize 5312
> serialization.ddl struct src { string key, string value}
> serialization.format 1
> serialization.lib 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> totalSize 5812
>  A masked pattern was here 
>   serde: org.apache.hadoop.hive.serde2.lazy.L

[jira] [Updated] (HIVE-22489) Reduce Sink operator should order nulls by parameter

2019-11-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-22489:
--
Status: Open  (was: Patch Available)

>  Reduce Sink operator should order nulls by parameter
> -
>
> Key: HIVE-22489
> URL: https://issues.apache.org/jira/browse/HIVE-22489
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-22489.1.patch, HIVE-22489.2.patch, 
> HIVE-22489.3.patch
>
>
> When the property hive.default.nulls.last is set to true and no null order is 
> explicitly specified in the ORDER BY clause of the query, the null ordering 
> should be NULLS LAST.
> But some of the Reduce Sink operators still order nulls first.
> {code}
> SET hive.default.nulls.last=true;
> EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key LIMIT 5;
> {code}
> {code}
> PREHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key
> PREHOOK: type: QUERY
> PREHOOK: Input: default@src
>  A masked pattern was here 
> POSTHOOK: query: EXPLAIN EXTENDED
> SELECT src1.key, src2.value FROM src src1 JOIN src src2 ON (src1.key = 
> src2.key) ORDER BY src1.key
> POSTHOOK: type: QUERY
> POSTHOOK: Input: default@src
>  A masked pattern was here 
> OPTIMIZED SQL: SELECT `t0`.`key`, `t2`.`value`
> FROM (SELECT `key`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t0`
> INNER JOIN (SELECT `key`, `value`
> FROM `default`.`src`
> WHERE `key` IS NOT NULL) AS `t2` ON `t0`.`key` = `t2`.`key`
> ORDER BY `t0`.`key`
> STAGE DEPENDENCIES:
>   Stage-1 is a root stage
>   Stage-0 depends on stages: Stage-1
> STAGE PLANS:
>   Stage: Stage-1
> Tez
>  A masked pattern was here 
>   Edges:
> Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
>  A masked pattern was here 
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: src1
>   filterExpr: key is not null (type: boolean)
>   Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   GatherStats: false
>   Filter Operator
> isSamplingPred: false
> predicate: key is not null (type: boolean)
> Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
> Select Operator
>   expressions: key (type: string)
>   outputColumnNames: _col0
>   Statistics: Num rows: 500 Data size: 43500 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   Reduce Output Operator
> key expressions: _col0 (type: string)
> null sort order: a
> sort order: +
> Map-reduce partition columns: _col0 (type: string)
> Statistics: Num rows: 500 Data size: 43500 Basic 
> stats: COMPLETE Column stats: COMPLETE
> tag: 0
> auto parallelism: true
> Execution mode: vectorized, llap
> LLAP IO: no inputs
> Path -> Alias:
>  A masked pattern was here 
> Path -> Partition:
>  A masked pattern was here 
> Partition
>   base file name: src
>   input format: org.apache.hadoop.mapred.TextInputFormat
>   output format: 
> org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
>   properties:
> COLUMN_STATS_ACCURATE 
> {"BASIC_STATS":"true","COLUMN_STATS":{"key":"true","value":"true"}}
> bucket_count -1
> bucketing_version 2
> column.name.delimiter ,
> columns key,value
> columns.comments 'default','default'
> columns.types string:string
>  A masked pattern was here 
> name default.src
> numFiles 1
> numRows 500
> rawDataSize 5312
> serialization.ddl struct src { string key, string value}
> serialization.format 1
> serialization.lib 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> totalSize 5812
>  A masked pattern was here 
>   serde: org.apache.hadoop.hive.serde2.

[jira] [Commented] (HIVE-22551) BytesColumnVector initBuffer should clean vector and length consistently

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985058#comment-16985058
 ] 

Hive QA commented on HIVE-22551:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} storage-api in master has 58 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19676/dev-support/hive-personality.sh
 |
| git revision | master / b9bdbed |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: storage-api U: storage-api |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19676/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> BytesColumnVector initBuffer should clean vector and length consistently 
> -
>
> Key: HIVE-22551
> URL: https://issues.apache.org/jira/browse/HIVE-22551
> Project: Hive
>  Issue Type: Bug
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Attachments: HIVE-22551.01.patch, HIVE-22551.01.patch, 
> HIVE-22551.01.patch, HIVE-22551.01.patch
>
>
> VectorExtractRow relies on the fact that vector[i] and length[i] are 
> consistent within the BytesColumnVector, otherwise it throws an exception:
> https://github.com/apache/hive/blob/edc53cc0d95e983c371a224943dd866210f0c65c/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExtractRow.java#L275
> There is a scenario in which only vector[i] has been cleaned while reusing the 
> column vector, and then the following kind of exception can be thrown.
> The reproduction was made with 
> [LlapDump|https://github.com/apache/hive/blob/master/llap-ext-client/src/java/org/apache/hadoop/hive/llap/LlapDump.java]
>  with String columns (longer than 16 chars)
> {code}
> 19/10/17 15:55:49 ERROR llap.LlapArrowRowRecordReader: Failed to fetch Arrow 
> batch
> java.lang.RuntimeException: STRING entry: batchIndex 45
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.BytesReadError(VectorExtractRow.java:488)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:193)
> at 
> org.apache.hadoop.hive.ql

[jira] [Commented] (HIVE-22562) Harmonize SessionState.getUserName

2019-11-29 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985068#comment-16985068
 ] 

Zoltan Haindrich commented on HIVE-22562:
-

Thank you for the note [~pvary]! :)
I've just taken a quick look, and it's interesting to see that 
[GenericUDFLoggedInUser.java contains a TODO about using the other one|
https://github.com/apache/hive/blob/b9bdbed48ce226b512c650657884f731efe80ce7/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLoggedInUser.java#L48];
 but of course I will try to be careful - I also hope that the important use cases 
are covered by tests, so that I will notice if I'm breaking anything.
I think impersonation works the other way around, meaning that the user running 
Hive becomes someone else - but I will take a much closer look at the call sites 
which only use getUserName().


> Harmonize SessionState.getUserName
> --
>
> Key: HIVE-22562
> URL: https://issues.apache.org/jira/browse/HIVE-22562
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> we might have 2 different user names at the same time:
> * 
> [getUserName()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1912]
> ** a method which relies on the userName field of the SessionState
> * 
> [getUserFromAuthenticator()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1291]
> ** a method which uses the authenticator to do the heavy lifting
> * there are all kinds of interesting call sites, like:
> ** there are some which are [prefering the authenticator over 
> getUserName()|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L254]
> ** there are some which [use getUserName() regardless authenticator, but have 
> fixme|https://github.com/apache/hive/blob/ab71e5a22834b5fdd17d6e4ddb54bcd324ae97d7/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1669]
> ** and there are some which are just using the authenticator with or without 
> notes/etc
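> For illustration, the two calls side by side (both methods are the ones linked 
> above; nothing here is new API):
> {code:java}
> // relies on the userName field stored in the SessionState instance
> String userFromSession = SessionState.get().getUserName();
> 
> // delegates to the configured authenticator
> String userFromAuthenticator = SessionState.getUserFromAuthenticator();
> {code}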



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22490) Adding jars with special characters in their path throws error

2019-11-29 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985078#comment-16985078
 ] 

Peter Vary commented on HIVE-22490:
---

+1

> Adding jars with special characters in their path throws error
> --
>
> Key: HIVE-22490
> URL: https://issues.apache.org/jira/browse/HIVE-22490
> Project: Hive
>  Issue Type: Bug
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22490.0.patch, HIVE-22490.1.patch
>
>
> HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
> jars or other dependencies in a Hive session, but it neglects to handle the URIs 
> as actual URIs, i.e. it just calls toString() on them.
> This resulted in a regression: a path such as /tmp/blabla-[special].jar was 
> working before HIVE-9664 and now throws a URISyntaxException error.
> I think it's fair to make the users provide a URL which is encoded 
> ({{blabla-%5Bspecial%5D.jar}}), but then the issue with the current 
> implementation is that the file cannot be found on the FS, because Hive 
> will look for it in the {{blabla-%5Bspecial%5D.jar}} format, instead of 
> blabla-[special].jar.
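> A minimal sketch of the decoding this implies (illustrative only, not the actual 
> patch): accept the encoded form from the user, but resolve the file on the FS 
> using the decoded path.
> {code:java}
> import java.net.URI;
> import java.net.URISyntaxException;
> 
> public class DecodeAddedJarPath {
>   public static void main(String[] args) throws URISyntaxException {
>     URI uri = new URI("/tmp/blabla-%5Bspecial%5D.jar"); // encoded form from the user
>     String fsPath = uri.getPath();                      // "/tmp/blabla-[special].jar"
>     System.out.println(fsPath);                         // what the FS lookup needs
>   }
> }
> {code}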



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-22490) Adding jars with special characters in their path throws error

2019-11-29 Thread Peter Vary (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985078#comment-16985078
 ] 

Peter Vary edited comment on HIVE-22490 at 11/29/19 3:26 PM:
-

+1 pending tests


was (Author: pvary):
+1

> Adding jars with special characters in their path throws error
> --
>
> Key: HIVE-22490
> URL: https://issues.apache.org/jira/browse/HIVE-22490
> Project: Hive
>  Issue Type: Bug
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22490.0.patch, HIVE-22490.1.patch
>
>
> HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
> jars or other dependencies in a Hive session, but it neglects to handle the URIs 
> as actual URIs, i.e. it just calls toString() on them.
> This resulted in a regression: a path such as /tmp/blabla-[special].jar was 
> working before HIVE-9664 and now throws a URISyntaxException error.
> I think it's fair to make the users provide a URL which is encoded 
> ({{blabla-%5Bspecial%5D.jar}}), but then the issue with the current 
> implementation is that the file cannot be found on the FS, because Hive 
> will look for it in the {{blabla-%5Bspecial%5D.jar}} format, instead of 
> blabla-[special].jar.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22551) BytesColumnVector initBuffer should clean vector and length consistently

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985097#comment-16985097
 ] 

Hive QA commented on HIVE-22551:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987153/HIVE-22551.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17751 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestPartitionManagement.testPartitionDiscoveryTransactionalTable
 (batchId=225)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19676/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19676/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19676/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987153 - PreCommit-HIVE-Build

> BytesColumnVector initBuffer should clean vector and length consistently 
> -
>
> Key: HIVE-22551
> URL: https://issues.apache.org/jira/browse/HIVE-22551
> Project: Hive
>  Issue Type: Bug
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Attachments: HIVE-22551.01.patch, HIVE-22551.01.patch, 
> HIVE-22551.01.patch, HIVE-22551.01.patch
>
>
> VectorExtractRow relies on the fact that vector[i] and length[i] are 
> consistent within the BytesColumnVector, otherwise it throws an exception:
> https://github.com/apache/hive/blob/edc53cc0d95e983c371a224943dd866210f0c65c/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExtractRow.java#L275
> There is a scenario in which only vector[i] has been cleaned while reusing the 
> column vector, and then the following kind of exception can be thrown.
> The reproduction was made with 
> [LlapDump|https://github.com/apache/hive/blob/master/llap-ext-client/src/java/org/apache/hadoop/hive/llap/LlapDump.java]
>  with String columns (longer than 16 chars)
> {code}
> 19/10/17 15:55:49 ERROR llap.LlapArrowRowRecordReader: Failed to fetch Arrow 
> batch
> java.lang.RuntimeException: STRING entry: batchIndex 45
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.BytesReadError(VectorExtractRow.java:488)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:193)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:483)
> at 
> org.apache.hadoop.hive.ql.io.arrow.Deserializer.deserialize(Deserializer.java:125)
> at 
> org.apache.hadoop.hive.ql.io.arrow.ArrowColumnarBatchSerDe.deserialize(ArrowColumnarBatchSerDe.java:284)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:75)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41)
> at datareader.LlapDump.main(LlapDump.java:124)
> {code}
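> A minimal sketch of the consistency the summary asks for (illustrative only, not 
> necessarily the actual patch): whenever the buffers are (re)initialized, the 
> per-row {{vector}}, {{start}} and {{length}} arrays should be reset together, so 
> a reader such as VectorExtractRow never sees a cleared {{vector[i]}} paired with 
> a stale {{length[i]}}.
> {code:java}
> // sketch of BytesColumnVector-style state; the three fields mirror the real ones
> byte[][] vector;  // per-row byte buffers
> int[] start;      // per-row start offsets
> int[] length;     // per-row lengths
> 
> void resetRows() {
>   for (int i = 0; i < vector.length; i++) {
>     vector[i] = null;  // clear the reference...
>     start[i] = 0;      // ...and the offset...
>     length[i] = 0;     // ...and the length, in one step
>   }
> }
> {code}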



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22565) Make calling alter_table unnecessary during inserts into ACID tables

2019-11-29 Thread Csaba Ringhofer (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Ringhofer updated HIVE-22565:
---
Summary: Make calling alter_table unnecessary during  inserts into ACID 
tables  (was: Make calling alter_table unnecessary during inserts)

> Make calling alter_table unnecessary during  inserts into ACID tables
> -
>
> Key: HIVE-22565
> URL: https://issues.apache.org/jira/browse/HIVE-22565
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Csaba Ringhofer
>Priority: Critical
>  Labels: ACID
>
> tl;dr: it would be good to set the table's writeId during commit, to make the 
> extra alter_table call unnecessary.
> This came up during the implementation of (insert_only) ACID inserts in 
> Apache Impala.
> The following description deals with the non-partitioned case; partitioned 
> tables are a bit more complicated.
> alter_table is called by Impala during inserts mainly to mark the stats as 
> inaccurate:
> - the table's writeId is set to the writeId of the insert
> - the table property column_stats_accurate is removed
> In the past we had the false assumption that setting the writeId is done 
> automatically by committing the transaction. It would be nice to have a 
> version of commit that actually does this - commits the transaction + changes 
> the writeId / marks the stats as inaccurate in a single atomic step.
> The current state of alter_table + commit being non-atomic can lead to weird 
> scenarios with parallel inserts (+ compute stats).
> Impala calls alter_table before commit, so the calls to HMS during inserts 
> look like this:
> 1. open a new transaction
> 2. get a shared lock on the table
> 3. get a write id
> ... write the files ...
> 4. call alter_table to remove column_stats_accurate (this also sets the writeId)
> 5. commit the transaction
> So the following can occur with two parallel writes + a compute stats: 
> 1. txn 1 calls alter_table (sets the writeId of txn 1)
> 2. txn 2 calls alter_table (sets the writeId of txn 2)
> 3. txn 2 is committed
> 4. compute stats runs (gets the validWriteId list, reads the table, sets the 
> stats with alter_table)
> 5. txn 1 is committed
> The compute stats will have the writeId of txn 2 in its validWriteId list, 
> so it will assume that it computed accurate stats. After step 5 the stats 
> will be considered accurate while they do not contain the new rows from txn 1.
> Another issue with frequent alter_table calls is that the effect of actual 
> ALTER TABLE commands that use shared locks (I think SET TBLPROPERTIES does 
> this in Hive) can simply be overwritten by alter_table calls from inserts 
> that used a different cached version of the table. This is generally a 
> problem if ALTER TABLE is called from different clients (without taking an 
> exclusive lock), but doing parallel DMLs is probably more common than doing 
> parallel DDLs.
> So issues can occur even if clients use the API correctly - another problem 
> is that the hard-to-use API may lead to buggy client implementations that can 
> easily mess up things for other components too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22062) WriteId is not updated for a partitioned ACID table when schema changes

2019-11-29 Thread Csaba Ringhofer (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985113#comment-16985113
 ] 

Csaba Ringhofer commented on HIVE-22062:


Linked HIVE-22565 as it also deals with changing table level writeId.

> WriteId is not updated for a partitioned ACID table when schema changes
> ---
>
> Key: HIVE-22062
> URL: https://issues.apache.org/jira/browse/HIVE-22062
> Project: Hive
>  Issue Type: Bug
>Reporter: Gabor Kaszab
>Assignee: Laszlo Kovari
>Priority: Major
>  Labels: ACID
>
> Changing the schema (e.g. adding a new column) of a non-partitioned ACID 
> table results in the table-level writeId being incremented. This is as 
> expected.
> However, if you do the same on a partitioned ACID table then neither the 
> table-level nor the partition-level writeIds are updated. I would expect the 
> table-level writeId to be incremented in this case to reflect that the table 
> has been changed.
> Note that get_valid_write_ids() shows that the high watermark is incremented 
> even though the writeId isn't.
> Update: I'd extend the scope of this Jira a bit further. There are a number 
> of use cases in Hive that don't result in a writeId change on ACID tables, 
> and as a result there is no way for other systems (like Impala) to judge 
> whether a refresh should be run on a table or not. The only option is to 
> refresh all the data for the table every time, which is expensive. E.g., in 
> addition to the above use case, compaction is not noticeable from outside 
> Hive either.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21954) QTest: support for running qtests on various metastore DBs

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985128#comment-16985128
 ] 

Hive QA commented on HIVE-21954:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987152/HIVE-21954.10.patch

{color:green}SUCCESS:{color} +1 due to 12 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17752 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19677/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19677/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19677/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987152 - PreCommit-HIVE-Build

> QTest: support for running qtests on various metastore DBs
> --
>
> Key: HIVE-21954
> URL: https://issues.apache.org/jira/browse/HIVE-21954
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore, Testing Infrastructure
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21954.01.patch, HIVE-21954.02.patch, 
> HIVE-21954.03.patch, HIVE-21954.03.patch, HIVE-21954.03.patch, 
> HIVE-21954.04.patch, HIVE-21954.05.patch, HIVE-21954.07.patch, 
> HIVE-21954.07.patch, HIVE-21954.08.patch, HIVE-21954.09.patch, 
> HIVE-21954.10.patch, HIVE-21954.10.patch
>
>
> In HIVE-21940, a postgres metastore related issue has been fixed, and a local 
> reproduction has been provided.
> {code}
> export QTEST_LEAVE_FILES=true
> docker kill metastore-test-postgres-install
> docker rm metastore-test-postgres-install
> cd standalone-metastore
> mvn verify -DskipITests=false -Dit.test=ITestPostgres#install -Dtest=nosuch 
> -Dmetastore.itest.no.stop.container=true
> cd ..
> mvn test -Dtest.output.overwrite=true -Pitests,hadoop-2 -pl itests/qtest 
> -Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
> -Dhive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
> {code}
> The problem with this solution is that data/conf/hive-site.xml has to be 
> edited manually. My proposal is to introduce a property 
> (-Dmetastore.db=postgres), which can take care of the parameters on the fly. 
> 2 supported solutions could be:
> 1. simple parameters: -Dmetastore.db=postgres
> In this case, tests depend on settings from ITestPostgres class (password, 
> db, etc.)
> 2. verbose but flexible parameters: [see hive-site.xml HIVE-21940's repro 
> patch|https://issues.apache.org/jira/secure/attachment/12973534/HIVE-21940.repro.patch]
>  
> In the first implementation, I would not start the metastore db automatically 
> (which is done by 'mvn verify ...'), but this is still under planning. 
> In the long term, we should consider running this kind of test in the precommit 
> phase, so maybe -Dmetastore.db=postgres could start the metastore db 
> automatically. We should also consider running some qtests on various 
> metastores. I would not pick them randomly, but choose some "metastore-heavy" 
> ones instead.
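> For illustration, the proposed usage (the -Dmetastore.db property is the 
> proposal of this issue, so its exact name and behavior may change) would look 
> something like:
> {code}
> mvn test -Pitests,hadoop-2 -pl itests/qtest -Dtest=TestCliDriver \
>   -Dqfile=partition_params_postgres.q -Dmetastore.db=postgres
> {code}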



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21954) QTest: support for running qtests on various metastore DBs

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985133#comment-16985133
 ] 

Hive QA commented on HIVE-21954:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
31s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
15s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
18s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
54s{color} | {color:blue} itests/util in master has 53 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore: The patch generated 1 new + 104 
unchanged - 6 fixed = 105 total (was 110) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 104 unchanged - 6 fixed = 105 total (was 110) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
19s{color} | {color:red} root: The patch generated 1 new + 800 unchanged - 6 
fixed = 801 total (was 806) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} patch/standalone-metastore/metastore-server cannot run 
setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  5m 
43s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
28s{color} | {color:red} patch/itests/hive-unit cannot run setBugDatabaseInfo 
from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} patch/itests/util cannot run setBugDatabaseInfo from 
findbugs {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 11m 
13s{color} | {color:red} root generated 9 new + 337 unchanged - 0 fixed = 346 
total (was 337) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} itests_util generated 9 new + 9 unchanged - 0 fixed = 
18 total (was 9) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| unam

[jira] [Commented] (HIVE-22554) ACID: Wait timeout for blocking compaction should be configurable

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985139#comment-16985139
 ] 

Hive QA commented on HIVE-22554:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
34s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19678/dev-support/hive-personality.sh
 |
| git revision | master / b9bdbed |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19678/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> ACID: Wait timeout for blocking compaction should be configurable
> -
>
> Key: HIVE-22554
> URL: https://issues.apache.org/jira/browse/HIVE-22554
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Minor
> Attachments: HIVE-22554.01.patch, HIVE-22554.02.patch
>
>
> The wait timeout for blocking compaction is hardcoded to 5 minutes. 
> {code:java}
> public class AlterTableCompactOperation extends 
> DDLOperation {
>   private static final int FIVE_MINUTES_IN_MILLIES = 5*60*1000;
> ...
> }{code}
> This should be configurable via a Hive Configuration parameter. 
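> A minimal sketch of what that could look like (the property name and default 
> below are assumptions for illustration, not an existing HiveConf variable):
> {code:java}
> // read the wait timeout from the configuration instead of the hardcoded constant
> long waitTimeoutMs = conf.getLong("hive.compactor.wait.timeout", 5 * 60 * 1000L);
> {code}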



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22554) ACID: Wait timeout for blocking compaction should be configurable

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985153#comment-16985153
 ] 

Hive QA commented on HIVE-22554:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987155/HIVE-22554.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17751 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19678/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19678/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19678/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987155 - PreCommit-HIVE-Build

> ACID: Wait timeout for blocking compaction should be configurable
> -
>
> Key: HIVE-22554
> URL: https://issues.apache.org/jira/browse/HIVE-22554
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Minor
> Attachments: HIVE-22554.01.patch, HIVE-22554.02.patch
>
>
> The wait timeout for blocking compaction is hardcoded to 5 minutes. 
> {code:java}
> public class AlterTableCompactOperation extends 
> DDLOperation {
>   private static final int FIVE_MINUTES_IN_MILLIES = 5*60*1000;
> ...
> }{code}
> This should be configurable via a Hive Configuration parameter. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-19358) CBO decorrelation logic should generate Hive operators

2019-11-29 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-19358:
--

Assignee: Jesus Camacho Rodriguez

> CBO decorrelation logic should generate Hive operators
> --
>
> Key: HIVE-19358
> URL: https://issues.apache.org/jira/browse/HIVE-19358
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19358.01.patch, HIVE-19358.02.patch, 
> HIVE-19358.03.patch, HIVE-19358.04.patch, HIVE-19358.05.patch, 
> HIVE-19358.patch, fix.patch
>
>
> Decorrelation logic may generate logical instances of the operators in the 
> plan (e.g., LogicalFilter instead of HiveFilter). This leads to errors while 
> costing the tree in the Volcano planner (used in MV rewriting), since logical 
> operators do not have a cost associated to them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22327) Repl: Ignore read-only transactions in notification log

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985185#comment-16985185
 ] 

Hive QA commented on HIVE-22327:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
13s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
25s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
27s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 4 new + 563 unchanged - 3 fixed = 567 total (was 566) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 1 new + 33 unchanged - 0 fixed 
= 34 total (was 33) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch server-extensions passed checkstyle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} itests/hcatalog-unit: The patch generated 0 new + 17 
unchanged - 1 fixed = 17 total (was 18) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} standalone-metastore/metastore-server generated 0 
new + 178 unchanged - 1 fixed = 178 total (was 179) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} server-extensions in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19679/dev-support/hive-personality.sh
 |
| git revision | master / b9bdbed |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19679/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19679/yetus/diff-checkst

[jira] [Commented] (HIVE-22327) Repl: Ignore read-only transactions in notification log

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985189#comment-16985189
 ] 

Hive QA commented on HIVE-22327:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987156/HIVE-22327.15.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17756 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19679/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19679/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19679/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987156 - PreCommit-HIVE-Build

> Repl: Ignore read-only transactions in notification log
> ---
>
> Key: HIVE-22327
> URL: https://issues.apache.org/jira/browse/HIVE-22327
> Project: Hive
>  Issue Type: Improvement
>  Components: repl
>Reporter: Gopal Vijayaraghavan
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-22327.1.patch, HIVE-22327.10.patch, 
> HIVE-22327.11.patch, HIVE-22327.12.patch, HIVE-22327.13.patch, 
> HIVE-22327.14.patch, HIVE-22327.15.patch, HIVE-22327.2.patch, 
> HIVE-22327.3.patch, HIVE-22327.4.patch, HIVE-22327.5.patch, 
> HIVE-22327.6.patch, HIVE-22327.7.patch, HIVE-22327.8.patch, HIVE-22327.9.patch
>
>
> Read txns need not be replicated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22536) Improve return path enabling/disabling

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985193#comment-16985193
 ] 

Hive QA commented on HIVE-22536:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
10s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} common: The patch generated 1 new + 367 unchanged - 1 
fixed = 368 total (was 368) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
50s{color} | {color:red} ql: The patch generated 2 new + 775 unchanged - 2 
fixed = 777 total (was 777) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19680/dev-support/hive-personality.sh
 |
| git revision | master / b9bdbed |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19680/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19680/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19680/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Improve return path enabling/disabling
> --
>
> Key: HIVE-22536
> URL: https://issues.apache.org/jira/browse/HIVE-22536
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22536.01.patch, HIVE-22536.02.patch, 
> HIVE-22536.03.patch
>
>
> Instead of having a boolean for hive.cbo.returnpath.hiveop it should be 
> on/off/supported. In 

[jira] [Updated] (HIVE-22551) BytesColumnVector initBuffer should clean vector and length consistently

2019-11-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-22551:

Attachment: HIVE-22551.01.patch

> BytesColumnVector initBuffer should clean vector and length consistently 
> -
>
> Key: HIVE-22551
> URL: https://issues.apache.org/jira/browse/HIVE-22551
> Project: Hive
>  Issue Type: Bug
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Attachments: HIVE-22551.01.patch, HIVE-22551.01.patch, 
> HIVE-22551.01.patch, HIVE-22551.01.patch, HIVE-22551.01.patch
>
>
> VectorExtractRow relies on the fact that vector[i] and length[i] are 
> consistent within the BytesColumnVector, otherwise it throws an exception:
> https://github.com/apache/hive/blob/edc53cc0d95e983c371a224943dd866210f0c65c/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorExtractRow.java#L275
> There is a scenario in which only vector[i] has been cleaned while reusing the 
> column vector, and then the following kind of exception can be thrown.
> The reproduction was made with 
> [LlapDump|https://github.com/apache/hive/blob/master/llap-ext-client/src/java/org/apache/hadoop/hive/llap/LlapDump.java]
>  with String columns (longer than 16 chars)
> {code}
> 19/10/17 15:55:49 ERROR llap.LlapArrowRowRecordReader: Failed to fetch Arrow 
> batch
> java.lang.RuntimeException: STRING entry: batchIndex 45
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.BytesReadError(VectorExtractRow.java:488)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:294)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:193)
> at 
> org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:483)
> at 
> org.apache.hadoop.hive.ql.io.arrow.Deserializer.deserialize(Deserializer.java:125)
> at 
> org.apache.hadoop.hive.ql.io.arrow.ArrowColumnarBatchSerDe.deserialize(ArrowColumnarBatchSerDe.java:284)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:75)
> at 
> org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41)
> at datareader.LlapDump.main(LlapDump.java:124)
> {code}
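
A minimal sketch of the consistent-reset idea (an editorial illustration using a 
simplified class, not the actual Hive BytesColumnVector API):

{code}
// Simplified, hypothetical column holder: the point is that the per-row
// reference, start and length arrays are reset together, so a cleared
// vector[i] can never sit next to a stale length[i].
public class SimpleBytesColumn {
  private final byte[][] vector;  // per-row byte references
  private final int[] start;      // per-row start offsets
  private final int[] length;     // per-row lengths
  private byte[] buffer;          // shared backing buffer
  private int nextFree;

  public SimpleBytesColumn(int size) {
    vector = new byte[size][];
    start = new int[size];
    length = new int[size];
  }

  /** Reset for reuse: clear all three arrays in the same loop. */
  public void initBuffer(int estimatedBytes) {
    nextFree = 0;
    for (int i = 0; i < vector.length; i++) {
      vector[i] = null;
      start[i] = 0;
      length[i] = 0;
    }
    if (buffer == null || buffer.length < estimatedBytes) {
      buffer = new byte[Math.max(estimatedBytes, 1)];
    }
  }
}
{code}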



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21954) QTest: support for running qtests on various metastore DBs

2019-11-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-21954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-21954:

Description: 
In HIVE-21940, a postgres metastore related issue has been fixed, and a local 
reproduction has been provided.

{code}
export QTEST_LEAVE_FILES=true
docker kill metastore-test-postgres-install
docker rm metastore-test-postgres-install
cd standalone-metastore
mvn verify -DskipITests=false -Dit.test=ITestPostgres#install -Dtest=nosuch 
-Dmetastore.itest.no.stop.container=true
cd ..
mvn test -Dtest.output.overwrite=true -Pitests,hadoop-2 -pl itests/qtest 
-Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
-Dhive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
{code}

The problem with this solution is that data/conf/hive-site.xml has to be edited 
manually. My proposal is to introduce a property (-Dmetastore.db=postgres) 
which can take care of the parameters on the fly. Two supported solutions could 
be:
1. simple parameters: -Dmetastore.db=postgres
In this case, tests depend on settings from the ITestPostgres class (password, 
db, etc.)
2. verbose but flexible parameters: [see hive-site.xml in HIVE-21940's repro 
patch|https://issues.apache.org/jira/secure/attachment/12973534/HIVE-21940.repro.patch]

In the long term, we should consider running this kind of test in the precommit 
phase, so maybe -Dmetastore.db=postgres could start the metastore DB 
automatically. We should also consider running some qtests on various 
metastores. I would not pick them randomly, but choose some "metastore-heavy" 
ones instead.

  was:
In HIVE-21940, a postgres metastore related issue has been fixed, and a local 
reproduction has been provided.

{code}
export QTEST_LEAVE_FILES=true
docker kill metastore-test-postgres-install
docker rm metastore-test-postgres-install
cd standalone-metastore
mvn verify -DskipITests=false -Dit.test=ITestPostgres#install -Dtest=nosuch 
-Dmetastore.itest.no.stop.container=true
cd ..
mvn test -Dtest.output.overwrite=true -Pitests,hadoop-2 -pl itests/qtest 
-Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
-Dhive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
{code}

The problem with this solution is that data/conf/hive-site.xml has to be edited 
manually. My proposal is to introduce a property (-Dmetastore.db=postgres), 
which can take care of the parameters on the fly. 2 supported solutions could 
be:
1. simple parameters: -Dmetastore.db=postgres
In this case, tests depend on settings from ITestPostgres class (password, db, 
etc.)
2. verbose but flexible parameters: [see hive-site.xml HIVE-21940's repro 
patch|https://issues.apache.org/jira/secure/attachment/12973534/HIVE-21940.repro.patch]
 

In the first implementation, I would not start the metastore DB automatically 
(which is done by 'mvn verify ...'), but it's still under planning. 
In the long term, we should consider running this kind of tests in precommit 
phase, so maybe -Dmetastore.db=postgres could start metastore db automatically. 
Also we should consider running some qtests on various metastores. I would not 
pick randomly, but choose some "metastore-heavy" ones instead.


> QTest: support for running qtests on various metastore DBs
> --
>
> Key: HIVE-21954
> URL: https://issues.apache.org/jira/browse/HIVE-21954
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore, Testing Infrastructure
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21954.01.patch, HIVE-21954.02.patch, 
> HIVE-21954.03.patch, HIVE-21954.03.patch, HIVE-21954.03.patch, 
> HIVE-21954.04.patch, HIVE-21954.05.patch, HIVE-21954.07.patch, 
> HIVE-21954.07.patch, HIVE-21954.08.patch, HIVE-21954.09.patch, 
> HIVE-21954.10.patch, HIVE-21954.10.patch
>
>
> In HIVE-21940, a postgres metastore related issue has been fixed, and a local 
> reproduction has been provided.
> {code}
> export QTEST_LEAVE_FILES=true
> docker kill metastore-test-postgres-install
> docker rm metastore-test-postgres-install
> cd standalone-metastore
> mvn verify -DskipITests=false -Dit.test=ITestPostgres#install -Dtest=nosuch 
> -Dmetastore.itest.no.stop.container=true
> cd ..
> mvn test -Dtest.output.overwrite=true -Pitests,hadoop-2 -pl itests/qtest 
> -Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
> -Dhive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
> {code}
> The problem with this solution is that data/conf/hive-site.xml has to be 
> edited manually. My proposal is to introduce a property 
> (-Dmetastore.db=postgres), which can take care of the parameters on the fly. 
> 2 supported solutions could be:
> 1. simple parameters: -Dmetastore.db=postgres
> In this case, tests dep

[jira] [Commented] (HIVE-21954) QTest: support for running qtests on various metastore DBs

2019-11-29 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-21954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985196#comment-16985196
 ] 

László Bodor commented on HIVE-21954:
-

pushed to master, thanks [~kgyrtkirk] for the review!

a couple of examples using the new feature:
{code}
mvn test -Dtest.output.overwrite=true -Pitests -pl itests/qtest 
-Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
-Dtest.metastore.db=mssql
mvn test -Dtest.output.overwrite=true -Pitests -pl itests/qtest 
-Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
-Dtest.metastore.db=mysql
mvn test -Dtest.output.overwrite=true -Pitests -pl itests/qtest 
-Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
-Dtest.metastore.db=postgres
mvn test -Dtest.output.overwrite=true -Pitests -pl itests/qtest 
-Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
-Dtest.metastore.db=oracle 
-Ditest.jdbc.jars=/path/to/your/oracle/jdbc/driver/ojdbc6.jar
{code}

The data can be checked in the metastore if you prevent it from being cleaned up:
{code}
export QTEST_LEAVE_FILES=true
mvn ... -Dmetastore.itest.no.stop.container=true
{code}

Remove all metastore Docker containers (e.g. to make sure you have a clean 
environment for running new qtests with the feature):
{code}
docker ps -a -q --filter="name=metastore-test-.*-install" | xargs docker rm -f
{code}
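
A hypothetical sketch of how such a -Dtest.metastore.db property might be 
consumed on the test side (names are illustrative only, not the actual Hive 
test harness API):

{code}
import java.util.Locale;

// Illustrative only: read the metastore DB type from a system property and
// fall back to Derby when it is not set.
public final class MetastoreDbSelector {
  public enum DbType { DERBY, POSTGRES, MYSQL, MSSQL, ORACLE }

  public static DbType fromSystemProperty() {
    String value = System.getProperty("test.metastore.db", "derby");
    return DbType.valueOf(value.trim().toUpperCase(Locale.ROOT));
  }

  public static void main(String[] args) {
    // e.g. run with -Dtest.metastore.db=postgres
    System.out.println("Selected metastore DB: " + fromSystemProperty());
  }
}
{code}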


> QTest: support for running qtests on various metastore DBs
> --
>
> Key: HIVE-21954
> URL: https://issues.apache.org/jira/browse/HIVE-21954
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore, Testing Infrastructure
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21954.01.patch, HIVE-21954.02.patch, 
> HIVE-21954.03.patch, HIVE-21954.03.patch, HIVE-21954.03.patch, 
> HIVE-21954.04.patch, HIVE-21954.05.patch, HIVE-21954.07.patch, 
> HIVE-21954.07.patch, HIVE-21954.08.patch, HIVE-21954.09.patch, 
> HIVE-21954.10.patch, HIVE-21954.10.patch
>
>
> In HIVE-21940, a postgres metastore related issue has been fixed, and a local 
> reproduction has been provided.
> {code}
> export QTEST_LEAVE_FILES=true
> docker kill metastore-test-postgres-install
> docker rm metastore-test-postgres-install
> cd standalone-metastore
> mvn verify -DskipITests=false -Dit.test=ITestPostgres#install -Dtest=nosuch 
> -Dmetastore.itest.no.stop.container=true
> cd ..
> mvn test -Dtest.output.overwrite=true -Pitests,hadoop-2 -pl itests/qtest 
> -Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
> -Dhive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
> {code}
> The problem with this solution is that data/conf/hive-site.xml has to be 
> edited manually. My proposal is to introduce a property 
> (-Dmetastore.db=postgres), which can take care of the parameters on the fly. 
> 2 supported solutions could be:
> 1. simple parameters: -Dmetastore.db=postgres
> In this case, tests depend on settings from ITestPostgres class (password, 
> db, etc.)
> 2. verbose but flexible parameters: [see hive-site.xml HIVE-21940's repro 
> patch|https://issues.apache.org/jira/secure/attachment/12973534/HIVE-21940.repro.patch]
>  
>  
> In the long term, we should consider running this kind of tests in precommit 
> phase, so maybe -Dmetastore.db=postgres could start metastore db 
> automatically. Also we should consider running some qtests on various 
> metastores. I would not pick randomly, but choose some "metastore-heavy" ones 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21954) QTest: support for running qtests on various metastore DBs

2019-11-29 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-21954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-21954:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> QTest: support for running qtests on various metastore DBs
> --
>
> Key: HIVE-21954
> URL: https://issues.apache.org/jira/browse/HIVE-21954
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore, Testing Infrastructure
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21954.01.patch, HIVE-21954.02.patch, 
> HIVE-21954.03.patch, HIVE-21954.03.patch, HIVE-21954.03.patch, 
> HIVE-21954.04.patch, HIVE-21954.05.patch, HIVE-21954.07.patch, 
> HIVE-21954.07.patch, HIVE-21954.08.patch, HIVE-21954.09.patch, 
> HIVE-21954.10.patch, HIVE-21954.10.patch
>
>
> In HIVE-21940, a postgres metastore related issue has been fixed, and a local 
> reproduction has been provided.
> {code}
> export QTEST_LEAVE_FILES=true
> docker kill metastore-test-postgres-install
> docker rm metastore-test-postgres-install
> cd standalone-metastore
> mvn verify -DskipITests=false -Dit.test=ITestPostgres#install -Dtest=nosuch 
> -Dmetastore.itest.no.stop.container=true
> cd ..
> mvn test -Dtest.output.overwrite=true -Pitests,hadoop-2 -pl itests/qtest 
> -Dtest=TestCliDriver -Dqfile=partition_params_postgres.q 
> -Dhive.metastore.rawstore.impl=org.apache.hadoop.hive.metastore.ObjectStore
> {code}
> The problem with this solution is that data/conf/hive-site.xml has to be 
> edited manually. My proposal is to introduce a property 
> (-Dmetastore.db=postgres), which can take care of the parameters on the fly. 
> 2 supported solutions could be:
> 1. simple parameters: -Dmetastore.db=postgres
> In this case, tests depend on settings from ITestPostgres class (password, 
> db, etc.)
> 2. verbose but flexible parameters: [see hive-site.xml HIVE-21940's repro 
> patch|https://issues.apache.org/jira/secure/attachment/12973534/HIVE-21940.repro.patch]
>  
>  
> In the long term, we should consider running this kind of tests in precommit 
> phase, so maybe -Dmetastore.db=postgres could start metastore db 
> automatically. Also we should consider running some qtests on various 
> metastores. I would not pick randomly, but choose some "metastore-heavy" ones 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22526) Extract Compiler from Driver

2019-11-29 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22526:
--
Attachment: HIVE-22526.06.patch

> Extract Compiler from Driver
> 
>
> Key: HIVE-22526
> URL: https://issues.apache.org/jira/browse/HIVE-22526
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22526.01.patch, HIVE-22526.02.patch, 
> HIVE-22526.03.patch, HIVE-22526.04.patch, HIVE-22526.05.patch, 
> HIVE-22526.06.patch
>
>
> The Driver class contains ~600 lines of code responsible for compiling the 
> command: from the command String a Plan needs to be created, and in most 
> cases a transaction needs to be started as well. This is done by the compile 
> function, which has a lot of sub-functions to help with this task and is 
> itself also really big. All this code should be moved into a separate class, 
> where it can do its job without getting mixed up with the other code in the 
> Driver.
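
A rough sketch of the refactoring direction (hypothetical names, not the actual 
Hive classes): the compilation step becomes its own collaborator that the 
Driver delegates to.

{code}
// Hypothetical shape of the extracted compiler; the real class naturally needs
// the parser, semantic analyzers and transaction manager wired in.
class Compiler {
  private final String command;

  Compiler(String command) {
    this.command = command;
  }

  /** Builds a plan from the command and starts a transaction if required. */
  CompiledPlan compile(boolean startTransaction) {
    CompiledPlan plan = new CompiledPlan(command);
    plan.transactionStarted = startTransaction;
    return plan;
  }
}

class CompiledPlan {
  final String source;
  boolean transactionStarted;

  CompiledPlan(String source) {
    this.source = source;
  }
}
{code}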



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22536) Improve return path enabling/disabling

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985198#comment-16985198
 ] 

Hive QA commented on HIVE-22536:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987157/HIVE-22536.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17751 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19680/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19680/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19680/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987157 - PreCommit-HIVE-Build

> Improve return path enabling/disabling
> --
>
> Key: HIVE-22536
> URL: https://issues.apache.org/jira/browse/HIVE-22536
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22536.01.patch, HIVE-22536.02.patch, 
> HIVE-22536.03.patch
>
>
> Instead of having a boolean for hive.cbo.returnpath.hiveop, it should be 
> on/off/supported. In the case of "supported", the return path should be used 
> only for the subset of commands that are already verified to work with it. 
> This is a temporary solution while the return path is being developed, before 
> making it the only way to handle commands.
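
A minimal sketch of the proposed tri-state (the enum name and helper are 
editorial examples, not the actual patch):

{code}
enum ReturnPathMode {
  ON,         // always use the return path
  OFF,        // never use it
  SUPPORTED;  // use it only for commands already verified to work with it

  static boolean useReturnPath(ReturnPathMode mode, boolean commandVerified) {
    switch (mode) {
      case ON:        return true;
      case SUPPORTED: return commandVerified;
      default:        return false;
    }
  }
}
{code}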



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22557) Break up DDLSemanticAnalyzer - extract Table constraints analyzers

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985209#comment-16985209
 ] 

Hive QA commented on HIVE-22557:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
26s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 1 new + 177 unchanged - 11 
fixed = 178 total (was 188) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19681/dev-support/hive-personality.sh
 |
| git revision | master / d645d82 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19681/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19681/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Break up DDLSemanticAnalyzer - extract Table constraints analyzers
> --
>
> Key: HIVE-22557
> URL: https://issues.apache.org/jira/browse/HIVE-22557
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22557.01.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into smaller, more manageable 
> classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #10: extract the table constraint related analyzers from 
> DDLSemanticAnalyzer and move them under the new package.
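
An illustrative-only sketch of the intended layout, one small analyzer class per 
DDL operation under its own package (the class, method and field names here are 
examples, not the actual patch):

{code}
// Example shape of a dedicated analyzer living in a per-operation package
// (e.g. a constraint-related subpackage), instead of one giant
// DDLSemanticAnalyzer handling every statement.
public class AlterTableAddConstraintAnalyzer {

  public ConstraintDescription analyze(String tableName, String constraintClause) {
    // validate the clause and build the description the operation will execute
    if (constraintClause == null || constraintClause.isEmpty()) {
      throw new IllegalArgumentException("empty constraint clause");
    }
    return new ConstraintDescription(tableName, constraintClause);
  }

  public static class ConstraintDescription {
    public final String tableName;
    public final String constraintClause;

    public ConstraintDescription(String tableName, String constraintClause) {
      this.tableName = tableName;
      this.constraintClause = constraintClause;
    }
  }
}
{code}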



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22557) Break up DDLSemanticAnalyzer - extract Table constraints analyzers

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985214#comment-16985214
 ] 

Hive QA commented on HIVE-22557:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987159/HIVE-22557.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17752 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19681/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19681/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19681/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987159 - PreCommit-HIVE-Build

> Break up DDLSemanticAnalyzer - extract Table constraints analyzers
> --
>
> Key: HIVE-22557
> URL: https://issues.apache.org/jira/browse/HIVE-22557
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22557.01.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into smaller, more manageable 
> classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #10: extract the table constraint related analyzers from 
> DDLSemanticAnalyzer and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-20150) TopNKey pushdown

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985219#comment-16985219
 ] 

Hive QA commented on HIVE-20150:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
16s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} ql: The patch generated 0 new + 37 unchanged - 1 
fixed = 37 total (was 38) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19682/dev-support/hive-personality.sh
 |
| git revision | master / d645d82 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19682/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> TopNKey pushdown
> 
>
> Key: HIVE-20150
> URL: https://issues.apache.org/jira/browse/HIVE-20150
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Affects Versions: 4.0.0
>Reporter: Teddy Choi
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20150.1.patch, HIVE-20150.10.patch, 
> HIVE-20150.11.patch, HIVE-20150.11.patch, HIVE-20150.14.patch, 
> HIVE-20150.15.patch, HIVE-20150.16.patch, HIVE-20150.17.patch, 
> HIVE-20150.17.patch, HIVE-20150.18.patch, HIVE-20150.2.patch, 
> HIVE-20150.4.patch, HIVE-20150.5.patch, HIVE-20150.6.patch, 
> HIVE-20150.7.patch, HIVE-20150.8.patch, HIVE-20150.9.patch
>
>
> The TopNKey operator was implemented in HIVE-17896, but the pushdown 
> implementation needs more work. This issue covers the TopNKey pushdown 
> implementation with proper tests.



--
This message was sent by Atlassian J

[jira] [Commented] (HIVE-20150) TopNKey pushdown

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985224#comment-16985224
 ] 

Hive QA commented on HIVE-20150:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987164/HIVE-20150.18.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17754 tests 
executed
*Failed tests:*
{noformat}
TestStatsReplicationScenariosACIDNoAutogather - did not produce a TEST-*.xml 
file (likely timed out) (batchId=257)
org.apache.hive.spark.client.rpc.TestRpc.testClientTimeout (batchId=359)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19682/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19682/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19682/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987164 - PreCommit-HIVE-Build

> TopNKey pushdown
> 
>
> Key: HIVE-20150
> URL: https://issues.apache.org/jira/browse/HIVE-20150
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Affects Versions: 4.0.0
>Reporter: Teddy Choi
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-20150.1.patch, HIVE-20150.10.patch, 
> HIVE-20150.11.patch, HIVE-20150.11.patch, HIVE-20150.14.patch, 
> HIVE-20150.15.patch, HIVE-20150.16.patch, HIVE-20150.17.patch, 
> HIVE-20150.17.patch, HIVE-20150.18.patch, HIVE-20150.2.patch, 
> HIVE-20150.4.patch, HIVE-20150.5.patch, HIVE-20150.6.patch, 
> HIVE-20150.7.patch, HIVE-20150.8.patch, HIVE-20150.9.patch
>
>
> The TopNKey operator was implemented in HIVE-17896, but the pushdown 
> implementation needs more work. This issue covers the TopNKey pushdown 
> implementation with proper tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22557) Break up DDLSemanticAnalyzer - extract Table constraints analyzers

2019-11-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22557?focusedWorklogId=351471&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-351471
 ]

ASF GitHub Bot logged work on HIVE-22557:
-

Author: ASF GitHub Bot
Created on: 30/Nov/19 00:52
Start Date: 30/Nov/19 00:52
Worklog Time Spent: 10m 
  Work Description: miklosgergely commented on pull request #859: 
HIVE-22557 Break up DDLSemanticAnalyzer - extract Table constraints analyzers
URL: https://github.com/apache/hive/pull/859
 
 
   DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
to refactor it so that everything is cut into smaller, more manageable classes 
under the package org.apache.hadoop.hive.ql.exec.ddl:
   
   - have a separate class for each analyzer
   - have a package for each operation, containing an analyzer, a description, 
and an operation, so the number of classes under a package stays manageable
   
   Step #10: extract the table constraint related analyzers from 
DDLSemanticAnalyzer and move them under the new package.
   
   Also modified the framework to be able to handle `alter table ...` 
statements.
   Also moved some of the constraint-related utility functions from 
BaseSemanticAnalyzer to the ddl.table.constraint package, into a new 
ConstraintUtils class.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 351471)
Remaining Estimate: 0h
Time Spent: 10m

> Break up DDLSemanticAnalyzer - extract Table constraints analyzers
> --
>
> Key: HIVE-22557
> URL: https://issues.apache.org/jira/browse/HIVE-22557
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22557.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into smaller, more manageable 
> classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #10: extract the table constraint related analyzers from 
> DDLSemanticAnalyzer and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22557) Break up DDLSemanticAnalyzer - extract Table constraints analyzers

2019-11-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22557:
--
Labels: pull-request-available refactor-ddl  (was: refactor-ddl)

> Break up DDLSemanticAnalyzer - extract Table constraints analyzers
> --
>
> Key: HIVE-22557
> URL: https://issues.apache.org/jira/browse/HIVE-22557
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available, refactor-ddl
> Attachments: HIVE-22557.01.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into smaller, more manageable 
> classes under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package stays manageable
> Step #10: extract the table constraint related analyzers from 
> DDLSemanticAnalyzer and move them under the new package.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22552) Some q tests uses the same name for test tables

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985225#comment-16985225
 ] 

Hive QA commented on HIVE-22552:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  2m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19683/dev-support/hive-personality.sh
 |
| git revision | master / d645d82 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19683/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Some q tests uses the same name for test tables
> ---
>
> Key: HIVE-22552
> URL: https://issues.apache.org/jira/browse/HIVE-22552
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22552.1.patch, HIVE-22552.2.patch, 
> HIVE-22552.2.patch
>
>
> Some q tests use the name "t_test" when creating a test table. This can cause 
> conflicts when running the tests in parallel against the same metastore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22552) Some q tests uses the same name for test tables

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985228#comment-16985228
 ] 

Hive QA commented on HIVE-22552:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987168/HIVE-22552.2.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17752 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1]
 (batchId=158)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19683/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19683/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19683/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987168 - PreCommit-HIVE-Build

> Some q tests uses the same name for test tables
> ---
>
> Key: HIVE-22552
> URL: https://issues.apache.org/jira/browse/HIVE-22552
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22552.1.patch, HIVE-22552.2.patch, 
> HIVE-22552.2.patch
>
>
> Some q tests use the name "t_test" when creating a test table. This can cause 
> conflicts when running the tests in parallel against the same metastore.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985235#comment-16985235
 ] 

Hive QA commented on HIVE-22544:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
27s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19684/dev-support/hive-personality.sh
 |
| git revision | master / d645d82 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19684/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, 
> HIVE-22544.4.patch, HIVE-22544.5.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22544) Disable null sort order at user level

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985240#comment-16985240
 ] 

Hive QA commented on HIVE-22544:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987167/HIVE-22544.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17752 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19684/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19684/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19684/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987167 - PreCommit-HIVE-Build

> Disable null sort order at user level
> -
>
> Key: HIVE-22544
> URL: https://issues.apache.org/jira/browse/HIVE-22544
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Minor
> Attachments: HIVE-22544.1.patch, HIVE-22544.1.patch, 
> HIVE-22544.2.patch, HIVE-22544.3.patch, HIVE-22544.4.patch, 
> HIVE-22544.4.patch, HIVE-22544.5.patch
>
>
> "sort order" and "null sort order" in ReduceSinkDesc and TopNKeyDesc should 
> not be exposed at user level 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22490) Adding jars with special characters in their path throws error

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985247#comment-16985247
 ] 

Hive QA commented on HIVE-22490:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
23s{color} | {color:blue} ql in master has 1534 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 1 new + 68 unchanged - 3 fixed 
= 69 total (was 71) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19685/dev-support/hive-personality.sh
 |
| git revision | master / d645d82 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19685/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19685/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Adding jars with special characters in their path throws error
> --
>
> Key: HIVE-22490
> URL: https://issues.apache.org/jira/browse/HIVE-22490
> Project: Hive
>  Issue Type: Bug
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22490.0.patch, HIVE-22490.1.patch
>
>
> HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
> jars or other dependencies in a Hive session, but it neglects to handle the 
> URIs as actual URIs, simply calling toString() on them.
> This resulted in a regression: a path such as /tmp/blabla-[special].jar was 
> working before HIVE-9664 and now throws a URISyntaxException.
> I think it's fair to make the users provide an encoded URL 
> ({{blabla-%5Bspecial%5D.jar}}), but then the issue with the current 
> implementation is the inability to find the file on the FS, because Hive 
> will look for it as {{blabla-%5Bspecial%5D.jar}} instead of 
> blabla-[special].jar.
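
A minimal sketch of the decoding step the description implies (plain JDK code, 
not Hive's actual SessionState handling): accept the encoded URL so URI parsing 
succeeds, but look the file up on the filesystem with the decoded path.

{code}
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class AddJarPathDemo {
  public static void main(String[] args) throws Exception {
    // The user supplies the encoded form so URI parsing does not fail...
    URI jarUri = new URI("file:///tmp/blabla-%5Bspecial%5D.jar");
    // ...but the filesystem lookup must use the decoded path.
    Path onDisk = Paths.get(jarUri.getPath());  // -> /tmp/blabla-[special].jar
    System.out.println("Looking up " + onDisk + ", exists=" + Files.exists(onDisk));
  }
}
{code}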



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22490) Adding jars with special characters in their path throws error

2019-11-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985259#comment-16985259
 ] 

Hive QA commented on HIVE-22490:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12987170/HIVE-22490.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17753 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=285)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19685/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19685/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19685/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12987170 - PreCommit-HIVE-Build

> Adding jars with special characters in their path throws error
> --
>
> Key: HIVE-22490
> URL: https://issues.apache.org/jira/browse/HIVE-22490
> Project: Hive
>  Issue Type: Bug
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
> Attachments: HIVE-22490.0.patch, HIVE-22490.1.patch
>
>
> HIVE-9664 introduced a change that uses URIs in SessionState to handle adding 
> jars or other dependencies in a Hive session, but it neglects to handle the 
> URIs as actual URIs, simply calling toString() on them.
> This resulted in a regression: a path such as /tmp/blabla-[special].jar was 
> working before HIVE-9664 and now throws a URISyntaxException.
> I think it's fair to make the users provide an encoded URL 
> ({{blabla-%5Bspecial%5D.jar}}), but then the issue with the current 
> implementation is the inability to find the file on the FS, because Hive 
> will look for it as {{blabla-%5Bspecial%5D.jar}} instead of 
> blabla-[special].jar.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22555) Upgrade ORC version to 1.5.8

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22555:

Status: Patch Available  (was: In Progress)

> Upgrade ORC version to 1.5.8
> 
>
> Key: HIVE-22555
> URL: https://issues.apache.org/jira/browse/HIVE-22555
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22555.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently depends on ORC 1.5.6. We need the 1.5.8 upgrade for 
> https://issues.apache.org/jira/browse/HIVE-22499
> ORC 1.5.7 includes https://issues.apache.org/jira/browse/ORC-361, which causes 
> some tests that override MemoryManager to fail. These need to be addressed 
> while upgrading.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-22555) Upgrade ORC version to 1.5.8

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22555 started by Mustafa Iman.
---
> Upgrade ORC version to 1.5.8
> 
>
> Key: HIVE-22555
> URL: https://issues.apache.org/jira/browse/HIVE-22555
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22555.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently depends on ORC 1.5.6. We need the 1.5.8 upgrade for 
> https://issues.apache.org/jira/browse/HIVE-22499
> ORC 1.5.7 includes https://issues.apache.org/jira/browse/ORC-361, which causes 
> some tests that override MemoryManager to fail. These need to be addressed 
> while upgrading.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22555) Upgrade ORC version to 1.5.8

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22555:

Status: Open  (was: Patch Available)

> Upgrade ORC version to 1.5.8
> 
>
> Key: HIVE-22555
> URL: https://issues.apache.org/jira/browse/HIVE-22555
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22555.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently depends on ORC 1.5.6. We need the 1.5.8 upgrade for 
> https://issues.apache.org/jira/browse/HIVE-22499
> ORC 1.5.7 includes https://issues.apache.org/jira/browse/ORC-361, which causes 
> some tests that override MemoryManager to fail. These need to be addressed 
> while upgrading.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22499:

Status: Open  (was: Patch Available)

> LLAP: Add an EncodedReaderOptions to extend ORC impl for options
> 
>
> Key: HIVE-22499
> URL: https://issues.apache.org/jira/browse/HIVE-22499
> Project: Hive
>  Issue Type: Bug
>  Components: llap, ORC
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22499.2.patch, HIVE-22499.WIP.patch, 
> HIVE-22499.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ORC-570 is an ABI change to the way getFileSystem() works, adding another 
> exception to the implementation.
> Accepting and using that change requires waiting for an ORC release, while 
> this patch serves the same purpose but falls back to a retry with 
> FileSystem.get() in case the supplier fails at runtime.
> Also, as a side note, the FS.get() call is always used in the cases where the 
> file is not being read from a cache such as EncodedOrcFile (so the upstream 
> API change might be overkill).
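
A generic sketch of the fallback pattern described above, using plain Java 
types instead of the actual Hadoop/ORC classes (FileSystem.get() is only a 
stand-in string here):

{code}
import java.util.function.Supplier;

public final class SupplierWithFallback {

  // Try the preferred supplier first; if it fails at runtime (or returns null),
  // retry with the fallback factory.
  static <T> T getOrFallback(Supplier<T> preferred, Supplier<T> fallback) {
    try {
      T value = preferred.get();
      if (value != null) {
        return value;
      }
    } catch (RuntimeException e) {
      // supplier failed at runtime; fall through to the retry path
    }
    return fallback.get();
  }

  public static void main(String[] args) {
    String fs = getOrFallback(
        () -> { throw new IllegalStateException("supplier failed"); },
        () -> "result of FileSystem.get() (stand-in)");
    System.out.println(fs);
  }
}
{code}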



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22499:

Attachment: HIVE-22499.2.patch
Status: Patch Available  (was: Open)

> LLAP: Add an EncodedReaderOptions to extend ORC impl for options
> 
>
> Key: HIVE-22499
> URL: https://issues.apache.org/jira/browse/HIVE-22499
> Project: Hive
>  Issue Type: Bug
>  Components: llap, ORC
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22499.2.patch, HIVE-22499.2.patch, 
> HIVE-22499.WIP.patch, HIVE-22499.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ORC-570 is an ABI change to the way getFileSystem() works, adding another 
> exception to the implementation.
> Accepting and using that change requires waiting for an ORC release, while 
> this patch serves the same purpose but falls back to a retry with 
> FileSystem.get() in case the supplier fails at runtime.
> Also, as a side note, the FS.get() call is always used in the cases where the 
> file is not being read from a cache such as EncodedOrcFile (so the upstream 
> API change might be overkill).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22555) Upgrade ORC version to 1.5.8

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22555:

Status: Open  (was: Patch Available)

> Upgrade ORC version to 1.5.8
> 
>
> Key: HIVE-22555
> URL: https://issues.apache.org/jira/browse/HIVE-22555
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22555.patch, HIVE-22555.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently depends on ORC 1.5.6. We need the 1.5.8 upgrade for 
> https://issues.apache.org/jira/browse/HIVE-22499
> ORC 1.5.7 includes https://issues.apache.org/jira/browse/ORC-361, which causes 
> some tests that override MemoryManager to fail. These need to be addressed 
> while upgrading.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22555) Upgrade ORC version to 1.5.8

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22555:

Attachment: HIVE-22555.patch
Status: Patch Available  (was: Open)

> Upgrade ORC version to 1.5.8
> 
>
> Key: HIVE-22555
> URL: https://issues.apache.org/jira/browse/HIVE-22555
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22555.patch, HIVE-22555.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently depends on ORC 1.5.6. We need the 1.5.8 upgrade for 
> https://issues.apache.org/jira/browse/HIVE-22499
> ORC 1.5.7 includes https://issues.apache.org/jira/browse/ORC-361, which causes 
> some tests that override MemoryManager to fail. These need to be addressed 
> while upgrading.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22555) Upgrade ORC version to 1.5.8

2019-11-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22555:

Attachment: HIVE-22555.2.patch
Status: Patch Available  (was: Open)

> Upgrade ORC version to 1.5.8
> 
>
> Key: HIVE-22555
> URL: https://issues.apache.org/jira/browse/HIVE-22555
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22555.2.patch, HIVE-22555.patch, HIVE-22555.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive currently depends on ORC 1.5.6. We need the 1.5.8 upgrade for 
> https://issues.apache.org/jira/browse/HIVE-22499
> ORC 1.5.7 includes https://issues.apache.org/jira/browse/ORC-361, which causes 
> some tests that override MemoryManager to fail. These need to be addressed 
> while upgrading.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

