[ https://issues.apache.org/jira/browse/HIVE-22731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021199#comment-17021199 ]

Hive QA commented on HIVE-22731:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12991394/HIVE-22731.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20274/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20274/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20274/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2020-01-22 15:28:22.744
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-20274/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-01-22 15:28:22.747
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at a7ca0a7 HIVE-22663: Quote all table and column names or do not quote any (Zoltan Chovan reviewed by Peter Vary)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at a7ca0a7 HIVE-22663: Quote all table and column names or do not quote any (Zoltan Chovan reviewed by Peter Vary)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-01-22 15:28:24.210
+ rm -rf ../yetus_PreCommit-HIVE-Build-20274
+ mkdir ../yetus_PreCommit-HIVE-Build-20274
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-20274
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-20274/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashTable.java:19
error: repository lacks the necessary blob to fall back on 3-way merge.
error: ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashTable.java: patch does not apply
Trying to apply the patch with -p1
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/optimized/VectorMapJoinOptimizedHashTable.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashMap.java: does not exist in index
error: src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index
error: src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/tez/LlapObjectCache.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashTable.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/hashtable/VectorMapJoinHashTable.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashSet.java: does not exist in index
error: src/test/resources/testconfiguration.properties: does not exist in index
error: src/java/org/apache/hadoop/hive/llap/io/decode/ColumnVectorProducer.java: does not exist in index
error: src/java/org/apache/hadoop/hive/llap/io/decode/OrcColumnVectorProducer.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashTable.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/VectorMapJoinCommonOperator.java: does not exist in index
error: src/java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java: does not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashMultiSet.java: does not exist in index
Trying to apply the patch with -p2
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/optimized/VectorMapJoinOptimizedHashTable.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashMap.java: does not exist in index
error: java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index
error: java/org/apache/hadoop/hive/llap/io/api/impl/LlapRecordReader.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/tez/LlapObjectCache.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashTable.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/hashtable/VectorMapJoinHashTable.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashSet.java: does not exist in index
error: test/resources/testconfiguration.properties: does not exist in index
error: java/org/apache/hadoop/hive/llap/io/decode/ColumnVectorProducer.java: does not exist in index
error: java/org/apache/hadoop/hive/llap/io/decode/OrcColumnVectorProducer.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashTable.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/VectorMapJoinCommonOperator.java: does not exist in index
error: java/org/apache/hadoop/hive/llap/io/decode/OrcEncodedDataConsumer.java: does not exist in index
error: java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastLongHashMultiSet.java: does not exist in index
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-20274
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12991394 - PreCommit-HIVE-Build

> Probe MapJoin hashtables for row level filtering
> ------------------------------------------------
>
>                 Key: HIVE-22731
>                 URL: https://issues.apache.org/jira/browse/HIVE-22731
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive, llap
>            Reporter: Panagiotis Garefalakis
>            Assignee: Panagiotis Garefalakis
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-22731.1.patch, HIVE-22731.WIP.patch, 
> decode_time_bars.pdf
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, RecordReaders such as ORC support filtering only at coarser-grained 
> levels, namely: file, stripe (64 to 256 MB), and row-group (10k rows) level. 
> They can only filter out a set of rows when they can guarantee that none of 
> the rows in it passes a filter (usually given as a searchable argument).
> However, a significant amount of time can be spent decoding rows with 
> multiple columns that are not even used in the final result. See the attached 
> figure, where "original" is what happens today and "LazyDecode" skips 
> decoding rows whose keys do not match.
> To enable finer-grained filtering in the particular case of a MapJoin, we 
> could use the key HashTable built from the smaller table to skip 
> deserializing the row columns of the larger table that do not match any key, 
> and thus save CPU time. 
> This Jira investigates this direction. 
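The idea in the description can be sketched in plain Java: build a key set from the small table, then on the probe side decode only the join key of each row and skip the expensive decoding of the remaining columns whenever the key is absent. This is a minimal illustration, not Hive's implementation; the class and method names (`ProbeFilterSketch`, `buildKeySet`, `probe`) are hypothetical, and Hive's actual hash tables (e.g. `VectorMapJoinFastLongHashTable`) are vectorized and considerably more elaborate.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of probe-side row filtering for a MapJoin:
// the build-side key set stands in for the MapJoin hash table.
public class ProbeFilterSketch {

    // Build side: collect join keys from the small table.
    static Set<Long> buildKeySet(long[] smallTableKeys) {
        Set<Long> keys = new HashSet<>();
        for (long k : smallTableKeys) {
            keys.add(k);
        }
        return keys;
    }

    // Probe side: each row is modeled as {joinKey, otherColumns...}.
    // Only the key column is "decoded" up front; rows whose key cannot
    // match are skipped before the remaining columns would be decoded.
    static List<long[]> probe(Set<Long> buildKeys, long[][] largeTableRows) {
        List<long[]> matched = new ArrayList<>();
        for (long[] row : largeTableRows) {
            long joinKey = row[0];          // cheap: key column only
            if (!buildKeys.contains(joinKey)) {
                continue;                   // skip decoding the other columns
            }
            matched.add(row);               // expensive decode only for matches
        }
        return matched;
    }

    public static void main(String[] args) {
        Set<Long> keys = buildKeySet(new long[]{1, 3});
        long[][] large = {{1, 100}, {2, 200}, {3, 300}, {4, 400}};
        List<long[]> out = probe(keys, large);
        System.out.println(out.size()); // prints 2: only keys 1 and 3 survive
    }
}
```

The CPU saving comes from the `continue`: in a columnar reader, the non-key columns of a skipped row would never be decompressed and deserialized at all.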



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
