[ https://issues.apache.org/jira/browse/HIVE-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15041209#comment-15041209 ]
Hive QA commented on HIVE-11110:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12775477/HIVE-11110.29.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6229/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/6229/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-6229/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.7.0_45-cloudera ]]
+ export JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ JAVA_HOME=/usr/java/jdk1.7.0_45-cloudera
+ export PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ PATH=/usr/java/jdk1.7.0_45-cloudera/bin/:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-maven-3.0.5/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-TRUNK-Build-6229/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 22fc397 HIVE-12444 Global Limit optimization on ACID table without base directory may throw exception ADDENDUM (Wei Zheng via Eugene Koifman)
+ git clean -f -d
+ git checkout master
Already on 'master'
+ git reset --hard origin/master
HEAD is now at 22fc397 HIVE-12444 Global Limit optimization on ACID table without base directory may throw exception ADDENDUM (Wei Zheng via Eugene Koifman)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12775477 - PreCommit-HIVE-TRUNK-Build

> Reorder applyPreJoinOrderingTransforms, add NotNULL/FilterMerge rules, improve Filter selectivity estimation
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-11110
>                 URL: https://issues.apache.org/jira/browse/HIVE-11110
>             Project: Hive
>          Issue Type: Bug
>          Components: CBO
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Laljo John Pullokkaran
>         Attachments: HIVE-11110-10.patch, HIVE-11110-11.patch, HIVE-11110-12.patch, HIVE-11110-branch-1.2.patch, HIVE-11110.1.patch, HIVE-11110.13.patch, HIVE-11110.14.patch, HIVE-11110.15.patch, HIVE-11110.16.patch, HIVE-11110.17.patch, HIVE-11110.18.patch, HIVE-11110.19.patch, HIVE-11110.2.patch, HIVE-11110.20.patch, HIVE-11110.21.patch, HIVE-11110.22.patch, HIVE-11110.23.patch, HIVE-11110.24.patch, HIVE-11110.25.patch, HIVE-11110.26.patch, HIVE-11110.27, HIVE-11110.27.patch, HIVE-11110.28.patch, HIVE-11110.29.patch, HIVE-11110.4.patch, HIVE-11110.5.patch, HIVE-11110.6.patch, HIVE-11110.7.patch, HIVE-11110.8.patch, HIVE-11110.9.patch, HIVE-11110.91.patch, HIVE-11110.92.patch, HIVE-11110.patch
>
>
> Query
> {code}
> select count(*)
> from store_sales
>     ,store_returns
>     ,date_dim d1
>     ,date_dim d2
> where d1.d_quarter_name = '2000Q1'
>   and d1.d_date_sk = ss_sold_date_sk
>   and ss_customer_sk = sr_customer_sk
>   and ss_item_sk = sr_item_sk
>   and ss_ticket_number = sr_ticket_number
>   and sr_returned_date_sk = d2.d_date_sk
>   and d2.d_quarter_name in ('2000Q1','2000Q2','2000Q3');
> {code}
> The store_sales table is partitioned on ss_sold_date_sk, which is also used in a join clause. The join clause should add a filter "filterExpr: ss_sold_date_sk is not null", which should get pushed to the MetaStore when fetching the stats. Currently this is not done in CBO planning, which results in the stats from __HIVE_DEFAULT_PARTITION__ being fetched and considered in the optimization phase. In particular, this increases the NDV for the join columns and may result in wrong planning.
> Including HiveJoinAddNotNullRule in the optimization phase solves this issue.
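For illustration, the sketch below shows the query shape once not-null predicates have been inferred on the inner-join keys. Since an inner join drops rows whose keys are NULL, these conjuncts are safe to add without changing results, and the one on the partition column ss_sold_date_sk is what lets the stats fetch exclude __HIVE_DEFAULT_PARTITION__. This is illustrative HiveQL only; the exact predicates and plan shape produced by HiveJoinAddNotNullRule may differ.

{code}
-- Sketch only: inner-join keys cannot be NULL, so IS NOT NULL conjuncts
-- can be added without changing the result. The predicate on the partition
-- column ss_sold_date_sk is the one that keeps __HIVE_DEFAULT_PARTITION__
-- (the NULL partition) out of the stats seen by the optimizer.
select count(*)
from store_sales
    ,store_returns
    ,date_dim d1
    ,date_dim d2
where d1.d_quarter_name = '2000Q1'
  and d1.d_date_sk = ss_sold_date_sk
  and ss_customer_sk = sr_customer_sk
  and ss_item_sk = sr_item_sk
  and ss_ticket_number = sr_ticket_number
  and sr_returned_date_sk = d2.d_date_sk
  and d2.d_quarter_name in ('2000Q1','2000Q2','2000Q3')
  and ss_sold_date_sk is not null      -- inferred: partition column / join key
  and sr_returned_date_sk is not null  -- inferred join-key predicates
  and ss_customer_sk is not null
  and ss_item_sk is not null
  and ss_ticket_number is not null
  and d1.d_date_sk is not null
  and d2.d_date_sk is not null;
{code}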