[jira] [Commented] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290476#comment-16290476 ] Hive QA commented on HIVE-18271: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901961/HIVE-18271.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11529 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8234/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8234/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8234/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 16 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12901961 - PreCommit-HIVE-Build > Druid Insert into fails with exception when committing files > > > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa > Fix For: 3.0.0 > > Attachments: HIVE-18271.2.patch, HIVE-18271.patch > > > Exception - > {code} > 03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved)' > org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty > to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved > at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1129) > at > org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1460) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1135) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:765) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:770) > at 
org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:588) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:286) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242) > at >
[jira] [Updated] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Choi JaeHwan updated HIVE-18268: Status: Open (was: Patch Available)
> Hive Prepared Statement when split with double quoted in query fails
> Key: HIVE-18268
> URL: https://issues.apache.org/jira/browse/HIVE-18268
> Project: Hive
> Issue Type: Bug
> Components: JDBC
> Affects Versions: 2.3.2
> Reporter: Choi JaeHwan
> Assignee: Choi JaeHwan
> Fix For: 2.3.3
> Attachments: HIVE-18268.1.patch, HIVE-18268.patch
>
> HIVE-13625 changed how the SQL statement is split when there is an odd number of escape characters, and added parameter-count validation:
> {code:java}
> // previous code
> StringBuilder newSql = new StringBuilder(parts.get(0));
> for (int i = 1; i < parts.size(); i++) {
>   if (!parameters.containsKey(i)) {
>     throw new SQLException("Parameter #" + i + " is unset");
>   }
>   newSql.append(parameters.get(i));
>   newSql.append(parts.get(i));
> }
>
> // change from HIVE-13625
> int paramLoc = 1;
> while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) {
>   // check that the user has set the needed parameters
>   if (parameters.containsKey(paramLoc)) {
>     int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', 1);
>     newSql.deleteCharAt(tt);
>     newSql.insert(tt, parameters.get(paramLoc));
>   }
>   paramLoc++;
> }
> {code}
> If the number of SQL fragments does not match the number of parameters, an SQLException is thrown.
> Currently, when splitting the SQL, double-quoted strings get no special handling: when the token ('?') appears between double quotes, the SQL is still split. A '?' inside double quotes is a literal, so it should not be treated as a split point.
> For example, in the following queries:
> {code:java}
> 1: String query = " select 1 from x where qa=\"?\" ";
> 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE \"ALA[d_?]%\") ";
> {code}
> the '?' is a literal, so the query should not be split. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
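The quote-aware scan the reporter is asking for can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual HivePreparedStatement code from the patch; the class name `ParamScanner` and method `placeholderIndexes` are hypothetical, and escape sequences and doubled quotes are deliberately ignored to keep the sketch short.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: locate '?' placeholders while skipping any '?' that
// sits inside a single- or double-quoted literal, so quoted text is never
// treated as a bind parameter. (Backslash escapes and doubled quotes inside
// literals are not handled here; the real patch would need to cover them.)
public class ParamScanner {
  public static List<Integer> placeholderIndexes(String sql) {
    List<Integer> idx = new ArrayList<>();
    char quote = 0; // the currently open quote character, or 0 when outside a literal
    for (int i = 0; i < sql.length(); i++) {
      char c = sql.charAt(i);
      if (quote != 0) {
        if (c == quote) {
          quote = 0; // matching quote closes the literal
        }
      } else if (c == '\'' || c == '"') {
        quote = c; // opening quote starts a literal
      } else if (c == '?') {
        idx.add(i); // a real parameter placeholder
      }
    }
    return idx;
  }

  public static void main(String[] args) {
    // '?' inside double quotes is a literal and is not counted
    System.out.println(placeholderIndexes("select 1 from x where qa=\"?\""));
    // only the unquoted '?' is reported
    System.out.println(placeholderIndexes("select 1 from x where a=? and b='?'"));
  }
}
```

With this kind of scan, the substitution loop would only rewrite the indexes the scanner returns, leaving both example queries from the description unsplit.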
[jira] [Updated] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Choi JaeHwan updated HIVE-18268: Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Choi JaeHwan updated HIVE-18268: Attachment: HIVE-18268.1.patch -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Choi JaeHwan updated HIVE-18268: Attachment: (was: HIVE-18268.1.ptach) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290417#comment-16290417 ] Hive QA commented on HIVE-18271: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 8ab523b | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8234/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Druid Insert into fails with exception when committing files > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271
[jira] [Commented] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290404#comment-16290404 ] Hive QA commented on HIVE-18248: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901963/HIVE-18248.1.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11533 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[change_hive_hdfs_session_path] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[change_hive_local_session_path] (batchId=92) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[change_hive_tmp_table_space] (batchId=94) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=236) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8233/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8233/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8233/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 20 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12901963 - PreCommit-HIVE-Build > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290386#comment-16290386 ] Hive QA commented on HIVE-18248: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | 
{color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 8ab523b | | Default Java | 1.8.0_111 | | modules | C: common ql . itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8233/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18272) Fix check-style violations in subquery code
[ https://issues.apache.org/jira/browse/HIVE-18272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290346#comment-16290346 ] Hive QA commented on HIVE-18272: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901957/HIVE-18272.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11134 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanQpChanges (batchId=285) 
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8232/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8232/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8232/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 16 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12901957 - PreCommit-HIVE-Build > Fix check-style violations in subquery code > --- > > Key: HIVE-18272 > URL: https://issues.apache.org/jira/browse/HIVE-18272 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18272.1.patch > > > Following files have quite a few checkstyle violations: > {{HiveSubQRemoveRelBuilder.java}} > {{HiveRelDecorrelator.java}} > {{HiveSubQueryRemoveRule.java}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18272) Fix check-style violations in subquery code
[ https://issues.apache.org/jira/browse/HIVE-18272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290324#comment-16290324 ] Hive QA commented on HIVE-18272: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s{color} | {color:red} ql: The patch generated 8 new + 25 unchanged - 534 fixed = 33 total (was 559) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 8ab523b | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8232/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8232/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Fix check-style violations in subquery code > --- > > Key: HIVE-18272 > URL: https://issues.apache.org/jira/browse/HIVE-18272 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18272.1.patch > > > Following files have quite a few checkstyle violations: > {{HiveSubQRemoveRelBuilder.java}} > {{HiveRelDecorrelator.java}} > {{HiveSubQueryRemoveRule.java}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18258) Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken
[ https://issues.apache.org/jira/browse/HIVE-18258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290310#comment-16290310 ] Hive QA commented on HIVE-18258: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901955/HIVE-18258.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11530 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=209) 
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8231/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8231/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8231/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 16 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12901955 - PreCommit-HIVE-Build > Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is > broken > - > > Key: HIVE-18258 > URL: https://issues.apache.org/jira/browse/HIVE-18258 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18258.01.patch, HIVE-18258.02.patch, > HIVE-18258.03.patch > > > See Q file. Duplicate columns in key are not handled correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Choi JaeHwan updated HIVE-18268: Status: Patch Available (was: Open)
> Hive Prepared Statement when split with double quoted in query fails
> --------------------------------------------------------------------
>
> Key: HIVE-18268
> URL: https://issues.apache.org/jira/browse/HIVE-18268
> Project: Hive
> Issue Type: Bug
> Components: JDBC
> Affects Versions: 2.3.2
> Reporter: Choi JaeHwan
> Assignee: Choi JaeHwan
> Fix For: 2.3.3
>
> Attachments: HIVE-18268.1.ptach, HIVE-18268.patch
>
>
> HIVE-13625 changed the SQL statement splitting when there is an odd number of escape characters, and added parameter count validation, as follows:
> {code:java}
> // prev code
> StringBuilder newSql = new StringBuilder(parts.get(0));
> for (int i = 1; i < parts.size(); i++) {
>   if (!parameters.containsKey(i)) {
>     throw new SQLException("Parameter #" + i + " is unset");
>   }
>   newSql.append(parameters.get(i));
>   newSql.append(parts.get(i));
> }
>
> // change from HIVE-13625
> int paramLoc = 1;
> while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) {
>   // check the user has set the needed parameters
>   if (parameters.containsKey(paramLoc)) {
>     int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', 1);
>     newSql.deleteCharAt(tt);
>     newSql.insert(tt, parameters.get(paramLoc));
>   }
>   paramLoc++;
> }
> {code}
> If the number of SQL fragments from the split does not match the number of parameters, an SQLException is thrown.
> Currently, when splitting the SQL, double-quoted strings are not handled, so the SQL is split even when the '?' token is between double quotes.
> I think a '?' between double quotes is a literal, so it is correct not to split on it.
> For example, in the queries below:
> {code:java}
> // Some comments here
> 1: String query = " select 1 from x where qa=\"?\" ";
> 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE \"ALA[d_?]%\") ";
> {code}
> the '?' is a literal, so the query should not be split.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Choi JaeHwan updated HIVE-18268: Status: Open (was: Patch Available)
> Hive Prepared Statement when split with double quoted in query fails
> --------------------------------------------------------------------
>
> Key: HIVE-18268
> URL: https://issues.apache.org/jira/browse/HIVE-18268
> Project: Hive
> Issue Type: Bug
> Components: JDBC
> Affects Versions: 2.3.2
> Reporter: Choi JaeHwan
> Assignee: Choi JaeHwan
> Fix For: 2.3.3
>
> Attachments: HIVE-18268.1.ptach, HIVE-18268.patch
>
>
> HIVE-13625 changed the SQL statement splitting when there is an odd number of escape characters, and added parameter count validation, as follows:
> {code:java}
> // prev code
> StringBuilder newSql = new StringBuilder(parts.get(0));
> for (int i = 1; i < parts.size(); i++) {
>   if (!parameters.containsKey(i)) {
>     throw new SQLException("Parameter #" + i + " is unset");
>   }
>   newSql.append(parameters.get(i));
>   newSql.append(parts.get(i));
> }
>
> // change from HIVE-13625
> int paramLoc = 1;
> while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) {
>   // check the user has set the needed parameters
>   if (parameters.containsKey(paramLoc)) {
>     int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', 1);
>     newSql.deleteCharAt(tt);
>     newSql.insert(tt, parameters.get(paramLoc));
>   }
>   paramLoc++;
> }
> {code}
> If the number of SQL fragments from the split does not match the number of parameters, an SQLException is thrown.
> Currently, when splitting the SQL, double-quoted strings are not handled, so the SQL is split even when the '?' token is between double quotes.
> I think a '?' between double quotes is a literal, so it is correct not to split on it.
> For example, in the queries below:
> {code:java}
> // Some comments here
> 1: String query = " select 1 from x where qa=\"?\" ";
> 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE \"ALA[d_?]%\") ";
> {code}
> the '?' is a literal, so the query should not be split.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Choi JaeHwan updated HIVE-18268: Attachment: HIVE-18268.1.ptach
> Hive Prepared Statement when split with double quoted in query fails
> --------------------------------------------------------------------
>
> Key: HIVE-18268
> URL: https://issues.apache.org/jira/browse/HIVE-18268
> Project: Hive
> Issue Type: Bug
> Components: JDBC
> Affects Versions: 2.3.2
> Reporter: Choi JaeHwan
> Assignee: Choi JaeHwan
> Fix For: 2.3.3
>
> Attachments: HIVE-18268.1.ptach, HIVE-18268.patch
>
>
> HIVE-13625 changed the SQL statement splitting when there is an odd number of escape characters, and added parameter count validation, as follows:
> {code:java}
> // prev code
> StringBuilder newSql = new StringBuilder(parts.get(0));
> for (int i = 1; i < parts.size(); i++) {
>   if (!parameters.containsKey(i)) {
>     throw new SQLException("Parameter #" + i + " is unset");
>   }
>   newSql.append(parameters.get(i));
>   newSql.append(parts.get(i));
> }
>
> // change from HIVE-13625
> int paramLoc = 1;
> while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) {
>   // check the user has set the needed parameters
>   if (parameters.containsKey(paramLoc)) {
>     int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', 1);
>     newSql.deleteCharAt(tt);
>     newSql.insert(tt, parameters.get(paramLoc));
>   }
>   paramLoc++;
> }
> {code}
> If the number of SQL fragments from the split does not match the number of parameters, an SQLException is thrown.
> Currently, when splitting the SQL, double-quoted strings are not handled, so the SQL is split even when the '?' token is between double quotes.
> I think a '?' between double quotes is a literal, so it is correct not to split on it.
> For example, in the queries below:
> {code:java}
> // Some comments here
> 1: String query = " select 1 from x where qa=\"?\" ";
> 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE \"ALA[d_?]%\") ";
> {code}
> the '?' is a literal, so the query should not be split.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290297#comment-16290297 ] Choi JaeHwan commented on HIVE-18268: [~asherman] Through testing, I wanted to make sure the string is not split when there is a '?' inside double quotes in the string. The test code has been modified.
> Hive Prepared Statement when split with double quoted in query fails
> --------------------------------------------------------------------
>
> Key: HIVE-18268
> URL: https://issues.apache.org/jira/browse/HIVE-18268
> Project: Hive
> Issue Type: Bug
> Components: JDBC
> Affects Versions: 2.3.2
> Reporter: Choi JaeHwan
> Assignee: Choi JaeHwan
> Fix For: 2.3.3
>
> Attachments: HIVE-18268.patch
>
>
> HIVE-13625 changed the SQL statement splitting when there is an odd number of escape characters, and added parameter count validation, as follows:
> {code:java}
> // prev code
> StringBuilder newSql = new StringBuilder(parts.get(0));
> for (int i = 1; i < parts.size(); i++) {
>   if (!parameters.containsKey(i)) {
>     throw new SQLException("Parameter #" + i + " is unset");
>   }
>   newSql.append(parameters.get(i));
>   newSql.append(parts.get(i));
> }
>
> // change from HIVE-13625
> int paramLoc = 1;
> while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) {
>   // check the user has set the needed parameters
>   if (parameters.containsKey(paramLoc)) {
>     int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', 1);
>     newSql.deleteCharAt(tt);
>     newSql.insert(tt, parameters.get(paramLoc));
>   }
>   paramLoc++;
> }
> {code}
> If the number of SQL fragments from the split does not match the number of parameters, an SQLException is thrown.
> Currently, when splitting the SQL, double-quoted strings are not handled, so the SQL is split even when the '?' token is between double quotes.
> I think a '?' between double quotes is a literal, so it is correct not to split on it.
> For example, in the queries below:
> {code:java}
> // Some comments here
> 1: String query = " select 1 from x where qa=\"?\" ";
> 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE \"ALA[d_?]%\") ";
> {code}
> the '?' is a literal, so the query should not be split.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
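The quote-aware splitting the reporter proposes can be sketched as below. This is an illustrative sketch only, not the actual Hive JDBC patch: the class and method names are hypothetical, and the escape-character handling from HIVE-13625 is deliberately omitted to keep the quote-tracking idea clear.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not the actual Hive JDBC code): split a SQL string on
// top-level '?' placeholders while treating '?' inside single- or
// double-quoted literals as plain text, so it is never split on.
public class ParamSplitter {
    public static List<String> splitOnPlaceholders(String sql) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        char quote = 0; // 0 means we are not inside a quoted literal
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (quote != 0) {
                // Inside a literal: only the matching closing quote ends it,
                // so a '?' in here is kept verbatim.
                if (c == quote) {
                    quote = 0;
                }
                current.append(c);
            } else if (c == '\'' || c == '"') {
                quote = c;
                current.append(c);
            } else if (c == '?') {
                // Top-level placeholder: end the current fragment here.
                parts.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        parts.add(current.toString());
        return parts;
    }

    public static void main(String[] args) {
        // '?' inside double quotes stays literal: one fragment, no split.
        System.out.println(splitOnPlaceholders("select 1 from x where qa=\"?\"").size());
        // Two real placeholders outside quotes: three fragments.
        System.out.println(splitOnPlaceholders("select 1 from x where a = ? and b = ?").size());
    }
}
```

With splitting like this, both example queries from the report yield a single fragment, so no parameter substitution is attempted on the quoted '?', and the fragment/parameter count check no longer throws for them.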
[jira] [Updated] (HIVE-18003) add explicit jdbc connection string args for mappings
[ https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18003: Attachment: HIVE-18003.05.patch Another rebase > add explicit jdbc connection string args for mappings > - > > Key: HIVE-18003 > URL: https://issues.apache.org/jira/browse/HIVE-18003 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, > HIVE-18003.03.patch, HIVE-18003.04.patch, HIVE-18003.05.patch, > HIVE-18003.patch > > > 1) Force using unmanaged/containers execution. > 2) Optional - specify pool name (config setting to gate this, disabled by > default?). > In phase 2 (or 4?) we might allow #2 to be used by a user to choose between > multiple mappings if they have multiple pools they could be mapped to (i.e. > to change the ordering essentially). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18257) implement scheduling policy configuration instead of hardcoding fair scheduling
[ https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18257: Attachment: HIVE-18257.patch [~prasanth_j] [~harishjp] can you take a look at this one too? :) > implement scheduling policy configuration instead of hardcoding fair > scheduling > --- > > Key: HIVE-18257 > URL: https://issues.apache.org/jira/browse/HIVE-18257 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18257.patch > > > Not sure it makes sense to actually make it pluggable. At least the standard > ones will be an enum; we don't expect people to implement custom classes - > phase 2 if someone wants to -- This message was sent by Atlassian JIRA (v6.4.14#64029)
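For illustration only, the "standard policies as an enum" idea from the issue description might look roughly like this. None of these names come from the actual HIVE-18257 patch; the enum constants, method names, and fallback behavior are assumptions for the sketch.

```java
import java.util.Locale;

// Hypothetical sketch of choosing a scheduling policy from configuration
// instead of hardcoding fair scheduling. The enum constants and fallback
// behavior are illustrative assumptions, not Hive's actual code.
public class PolicyConfigDemo {
    public enum SchedulingPolicy {
        FAIR, FIFO;

        // Parse a configuration value, falling back to FAIR (the previously
        // hardcoded behavior) when the value is missing or unrecognized.
        public static SchedulingPolicy fromConf(String value) {
            if (value == null) {
                return FAIR;
            }
            try {
                return valueOf(value.trim().toUpperCase(Locale.ROOT));
            } catch (IllegalArgumentException e) {
                return FAIR;
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(SchedulingPolicy.fromConf("fifo")); // FIFO
        System.out.println(SchedulingPolicy.fromConf(null));   // FAIR
    }
}
```

An enum like this keeps the configuration surface closed (no custom classes), matching the description's point that pluggability could wait for a phase 2.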
[jira] [Updated] (HIVE-18257) implement scheduling policy configuration instead of hardcoding fair scheduling
[ https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18257: Status: Patch Available (was: Open) > implement scheduling policy configuration instead of hardcoding fair > scheduling > --- > > Key: HIVE-18257 > URL: https://issues.apache.org/jira/browse/HIVE-18257 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18257.patch > > > Not sure it makes sense to actually make it pluggable. At least the standard > ones will be an enum; we don't expect people to implement custom classes - > phase 2 if someone wants to -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18258) Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken
[ https://issues.apache.org/jira/browse/HIVE-18258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290269#comment-16290269 ] Hive QA commented on HIVE-18258: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 
0m 37s{color} | {color:red} ql: The patch generated 6 new + 8 unchanged - 4 fixed = 14 total (was 12) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 8ab523b | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8231/yetus/diff-checkstyle-ql.txt | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8231/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is > broken > - > > Key: HIVE-18258 > URL: https://issues.apache.org/jira/browse/HIVE-18258 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18258.01.patch, HIVE-18258.02.patch, > HIVE-18258.03.patch > > > See Q file. Duplicate columns in key are not handled correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290256#comment-16290256 ] liyunzhang commented on HIVE-18148:
{code}
grep -C2 "hive.auto.convert.join" $HIVE_SOURCE/itests/qtest/target/testconf/spark/yarn-client/hive-site.xml
160-
161-  <property>
162:    <name>hive.auto.convert.join</name>
163-    <value>false</value>
164-    <description>Whether Hive enable the optimization about converting common join into mapjoin based on the input file size</description>
{code}
When running spark_dynamic_partition_pruning_5.q, the above hive-site.xml is used, so {{hive.auto.convert.join}} is false.
> NPE in SparkDynamicPartitionPruningResolver
> -------------------------------------------
>
> Key: HIVE-18148
> URL: https://issues.apache.org/jira/browse/HIVE-18148
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Reporter: Rui Li
> Assignee: Rui Li
> Attachments: HIVE-18148.1.patch
>
>
> The stack trace is:
> {noformat}
> 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] ql.Driver: FAILED: NullPointerException null
> java.lang.NullPointerException
> at org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100)
> at org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
> at org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180)
> at org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125)
> at org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74)
> at org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568)
> {noformat}
> At this stage, there shouldn't be a DPP sink whose target map work is null.
> The root cause seems to be a malformed operator tree generated by SplitOpTreeForDPP.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18257) implement scheduling policy configuration instead of hardcoding fair scheduling
[ https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18257: Description: Not sure it makes sense to actually make it pluggable. At least the standard ones will be an enum; we don't expect people to implement custom classes - phase 2 if someone wants to (was: Not sure it makes sense to actually make it pluggable. No good way to plug it ) > implement scheduling policy configuration instead of hardcoding fair > scheduling > --- > > Key: HIVE-18257 > URL: https://issues.apache.org/jira/browse/HIVE-18257 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > Not sure it makes sense to actually make it pluggable. At least the standard > ones will be an enum; we don't expect people to implement custom classes - > phase 2 if someone wants to -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18257) implement scheduling policy configuration instead of hardcoding fair scheduling
[ https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18257: Summary: implement scheduling policy configuration instead of hardcoding fair scheduling (was: implement scheduling policy interface; move the fair policy code) > implement scheduling policy configuration instead of hardcoding fair > scheduling > --- > > Key: HIVE-18257 > URL: https://issues.apache.org/jira/browse/HIVE-18257 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18257) implement scheduling policy configuration instead of hardcoding fair scheduling
[ https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18257: Description: Not sure it makes sense to actually make it pluggable. No good way to plug it > implement scheduling policy configuration instead of hardcoding fair > scheduling > --- > > Key: HIVE-18257 > URL: https://issues.apache.org/jira/browse/HIVE-18257 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > Not sure it makes sense to actually make it pluggable. No good way to plug it -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18003) add explicit jdbc connection string args for mappings
[ https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290250#comment-16290250 ] Hive QA commented on HIVE-18003: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901948/HIVE-18003.04.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8230/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8230/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8230/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-12-14 02:45:11.169 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8230/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-12-14 02:45:11.172 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git clean -f -d Removing ${project.basedir}/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-12-14 02:45:14.947 + rm -rf ../yetus + mkdir ../yetus + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8230/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:137 Falling back to three-way merge... Applied patch to 'jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java' with conflicts. Going to apply patch with: git apply -p0 /data/hiveptest/working/scratch/build.patch:400: trailing whitespace. error: patch failed: jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:137 Falling back to three-way merge... Applied patch to 'jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java' with conflicts. U jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java warning: 1 line adds whitespace errors. + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12901948 - PreCommit-HIVE-Build > add explicit jdbc connection string args for mappings > - > > Key: HIVE-18003 > URL: https://issues.apache.org/jira/browse/HIVE-18003 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, > HIVE-18003.03.patch, HIVE-18003.04.patch, HIVE-18003.patch > > > 1) Force using unmanaged/containers execution. > 2) Optional - specify pool name (config setting to gate this, disabled by > default?). > In phase 2 (or 4?) we might allow #2 to be used by a user to choose between > multiple mappings if they have multiple pools they could be mapped to (i.e. > to change the ordering essentially). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18257) implement scheduling policy interface; move the fair policy code
[ https://issues.apache.org/jira/browse/HIVE-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18257: --- Assignee: Sergey Shelukhin > implement scheduling policy interface; move the fair policy code > > > Key: HIVE-18257 > URL: https://issues.apache.org/jira/browse/HIVE-18257 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290247#comment-16290247 ] Hive QA commented on HIVE-18269: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901931/HIVE-18269.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 10764 tests executed *Failed tests:* {noformat} TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=39) [unionall_join_nullconstant.q,tez_join.q,cbo_rp_windowing.q,orc_merge11.q,udf_float.q,udf_sentences.q,bucketmapjoin13.q,udf_split.q,load_dyn_part9.q,auto_join16.q,vector_reduce2.q,tez_joins_explain.q,udf_replace.q,create_or_replace_view.q,alter_partition_clusterby_sortby.q,exchange_partition2.q,vector_aggregate_9.q,udf_greaterthan.q,exim_15_external_part.q,delete_orig_table.q,index_auto_unused.q,groupby_position.q,llap_acid_fast.q,acid_subquery.q,nullformatCTAS.q,decimal_join2.q,join21.q,cbo_rp_groupby3_noskew_multi_distinct.q,transform1.q,delete_where_partitioned.q] TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=58) [load_dyn_part2.q,llap_uncompressed.q,smb_mapjoin_7.q,mapjoin46.q,temp_table_external.q,ctas_colname.q,index_auto_empty.q,index_in_db.q,subquery_in_having.q,vectorized_string_funcs.q,vectorization_1.q,stats_ppr_all.q,join0.q,timestamptz_1.q,decimal_6.q,udf_sign.q,alter_file_format.q,vector_udf1.q,select_unquote_not.q,join14_hadoop20.q,constprog_when_case.q,druid_timeseries.q,avro_change_schema.q,create_udaf.q,array_size_estimation.q,merge3.q,lateral_view_onview.q,groupby4_map_skew.q,ppd_constant_expr.q,drop_table_with_stats.q] TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=74) 
[auto_join24.q,parquet_schema_evolution.q,udf_to_string.q,vectorized_distinct_gby.q,mapreduce8.q,constantfolding.q,groupby8.q,serde_opencsv.q,druidmini_test1.q,tez_vector_dynpart_hashjoin_1.q,groupby_multi_insert_common_distinct.q,join6.q,expr_cached.q,script_pipe.q,udf_bitwise_or.q,multiMapJoin2.q,filter_join_breaktask.q,udf_regexp.q,udf_xpath_long.q,ppd_multi_insert.q,alter_merge_2_orc.q,join_thrift.q,pointlookup4.q,union4.q,load_fs2.q,llap_text.q,input42.q,udf_mask.q,dynamic_semijoin_reduction_3.q,stats_aggregator_error_1.q] TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=8) [dp_counter_mm.q,llap_reader.q,columnstats_tbllvl.q,insert_into_with_schema.q,groupby_map_ppr.q,input_part1.q,convert_enum_to_string.q,union14.q,subquery_unqual_corr_expr.q,annotate_stats_filter.q,sort_merge_join_desc_8.q,udf_format_number.q,dynamic_semijoin_reduction_sw.q,alter_change_db_location.q,udf_minute.q,groupby_sort_test_1.q,authorization_update.q,authorization_cli_createtab_noauthzapi.q,tez_insert_overwrite_local_directory_1.q,testSetQueryString.q,parquet_ppd_partition.q,nested_complex.q,alter_table_serde.q,drop_view.q,exim_09_part_spec_nonoverlap.q,delimiter.q,udaf_collect_set.q,authorization_view_4.q,groupby_sort_skew_1_23.q,skewjoinopt12.q] TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=80) [groupby6_map.q,tez_union_group_by.q,llap_acid.q,groupby_nullvalues.q,join15.q,msck_repair_0.q,msck_repair_1.q,udf_round_2.q,setop_no_distinct.q,authorization_reset.q,vectorization_decimal_date.q,windowing_columnPruning.q,create_nested_type.q,stats13.q,stats_publisher_error_1.q,groupby_sort_3.q,partInit.q,auto_join13.q,partition_decode_name.q,date_1.q,join_acid_non_acid.q,udf9.q,vector_groupby_grouping_window.q,auto_join21.q,join_view.q,input_lazyserde2.q,encryption_insert_partition_dynamic.q,crtseltbl_serdeprops.q,fold_eq_with_case_when.q,dynamic_partition_skip_default.q] TestMiniLlapCliDriver - did not produce a TEST-*.xml file 
(likely timed out) (batchId=147) [mapreduce2.q,orc_llap_counters1.q,bucket6.q,insert_into1.q,empty_dir_in_table.q,orc_merge1.q,parquet_types_vectorization.q,orc_merge_diff_fs.q,llap_stats.q,llapdecider.q,load_hdfs_file_with_space_in_the_name.q,llap_nullscan.q,orc_ppd_basic.q,rcfile_merge4.q,orc_merge3.q] TestMiniLlapCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=148) [acid_bucket_pruning.q] TestMiniLlapCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=149) [intersect_all.q,unionDistinct_1.q,orc_ppd_schema_evol_3a.q,table_nonprintable.q,tez_union_dynamic_partition.q,tez_union_dynamic_partition_2.q,temp_table_external.q,global_limit.q,llap_udf.q,schemeAuthority.q,cte_2.q,rcfile_createas1.q,dynamic_partition_pruning_2.q,intersect_merge.q,parallel_colstats.q] TestMiniLlapCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=150)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290243#comment-16290243 ] liyunzhang commented on HIVE-18148: --- [~lirui]: If this is also found in the common join case, please add comments to describe it and update the existing comments. I see the following in the patch: {code} // If a branch is of pattern "RS - MAPJOIN", it means we're on the "small table" side of a // map join. Since there will be a job boundary, we shouldn't look for DPPs beyond this. private boolean stopAtMJ(Operator op) { } {code} I guess there is no possibility of seeing MAPJOIN in this situation. > NPE in SparkDynamicPartitionPruningResolver > --- > > Key: HIVE-18148 > URL: https://issues.apache.org/jira/browse/HIVE-18148 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-18148.1.patch > > > The stack trace is: > {noformat} > 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] > ql.Driver: FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125) > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74) > at > org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568) > {noformat} > At this stage, there shouldn't be a DPP sink whose target map work is null. > The root cause seems to be a malformed operator tree generated by > SplitOpTreeForDPP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
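The boundary check under discussion can be illustrated with a self-contained sketch. This is illustrative only: the {{Op}} class and the string operator names below are simplified stand-ins, not Hive's actual operator classes. The idea mirrors stopAtMJ: a walk that collects DPP sinks but refuses to cross an RS whose child is a MAPJOIN, since that edge is a job boundary onto the small-table side of a map join.

```java
// Illustrative sketch only: a minimal operator tree where a walk collecting
// DPP sinks stops at a reduce-sink -> map-join boundary. Class and field
// names here are hypothetical, not Hive's.
import java.util.ArrayList;
import java.util.List;

public class DppWalkSketch {
    static class Op {
        final String name;                       // e.g. "TS", "RS", "MAPJOIN", "DPP_SINK"
        final List<Op> children = new ArrayList<>();
        Op(String name) { this.name = name; }
        Op child(Op c) { children.add(c); return this; }
    }

    /** Collect DPP sinks reachable from op, but do not descend through an
     *  RS whose child is a MAPJOIN (the "small table" side boundary). */
    static void collectDppSinks(Op op, List<String> found) {
        if (op.name.equals("DPP_SINK")) {
            found.add(op.name);
            return;
        }
        for (Op c : op.children) {
            if (op.name.equals("RS") && c.name.equals("MAPJOIN")) {
                continue; // job boundary: stop looking for DPPs beyond the map join
            }
            collectDppSinks(c, found);
        }
    }

    public static void main(String[] args) {
        // TS -> RS -> MAPJOIN -> DPP_SINK : sink lies beyond the boundary, skipped
        Op beyond = new Op("TS").child(new Op("RS").child(new Op("MAPJOIN").child(new Op("DPP_SINK"))));
        // TS -> SEL -> DPP_SINK : sink is on this side of any boundary, found
        Op local = new Op("TS").child(new Op("SEL").child(new Op("DPP_SINK")));
        List<String> a = new ArrayList<>(), b = new ArrayList<>();
        collectDppSinks(beyond, a);
        collectDppSinks(local, b);
        System.out.println(a.size() + " " + b.size()); // prints "0 1"
    }
}
```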
[jira] [Updated] (HIVE-18153) refactor reopen and file management in TezTask
[ https://issues.apache.org/jira/browse/HIVE-18153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18153: Attachment: HIVE-18153.06.patch > refactor reopen and file management in TezTask > -- > > Key: HIVE-18153 > URL: https://issues.apache.org/jira/browse/HIVE-18153 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18153.01.patch, HIVE-18153.02.patch, > HIVE-18153.03.patch, HIVE-18153.04.patch, HIVE-18153.05.patch, > HIVE-18153.06.patch, HIVE-18153.patch > > > TezTask reopen relies on getting the same session object in terms of setup; > WM reopen returns a new session from the pool. > The former has the advantage of not having to reupload files and stuff... but > the object reuse results in a lot of ugly code, and also reopen might be > slower on average with the session pool than just getting a session from the > pool. Either WM needs to do the object-preserving reopen, or TezTask needs to > be refactored. It looks like DAG would have to be rebuilt to do the latter > because of some paths tied to a directory of the old session. Let me see if I > can get around that; if not we can do the former; and then if the former > results in too much ugly code in WM to account for object reuse for different > Tez client I'd do the latter anyway since it's a failure path :) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18273) add LLAP-level counters for WM
[ https://issues.apache.org/jira/browse/HIVE-18273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18273: Description: On query fragment level (like IO counters) time queued as guaranteed; time running as guaranteed; time running as speculative. > add LLAP-level counters for WM > -- > > Key: HIVE-18273 > URL: https://issues.apache.org/jira/browse/HIVE-18273 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > On query fragment level (like IO counters) > time queued as guaranteed; > time running as guaranteed; > time running as speculative. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18273) add LLAP-level counters for WM
[ https://issues.apache.org/jira/browse/HIVE-18273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18273: --- > add LLAP-level counters for WM > -- > > Key: HIVE-18273 > URL: https://issues.apache.org/jira/browse/HIVE-18273 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
[ https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290231#comment-16290231 ] Hui Huang commented on HIVE-18265: -- Ok, I'll add the test cases today. > desc formatted/extended or show create table can not fully display the result > when field or table comment contains tab character > > > Key: HIVE-18265 > URL: https://issues.apache.org/jira/browse/HIVE-18265 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 3.0.0 >Reporter: Hui Huang >Assignee: Hui Huang > Fix For: 3.0.0 > > Attachments: HIVE-18265.patch > > > Here are some examples: > create table test_comment (id1 string comment 'full_\tname1', id2 string > comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile; > When executing `show create table test_comment`, we see the following > content in the console: > {quote} > createtab_stmt > CREATE TABLE `test_comment`( > `id1` string COMMENT 'full_ > `id2` string COMMENT 'full_ > `id3` string COMMENT 'full_ > ROW FORMAT SERDE > 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' > STORED AS INPUTFORMAT > 'org.apache.hadoop.mapred.TextInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' > LOCATION > 'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1513095570') > {quote} > The output of `desc formatted table ` is similar: > {quote} > col_name data_type comment > \# col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > \# Detailed Table Information > (ignore)... 
> {quote} > When executing `desc extended test_comment`, the problem is more obvious: > {quote} > col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > Detailed Table Information Table(tableName:test_comment, > dbName:huanghuitest, owner:huanghui, createTime:1513095570, lastAccessTime:0, > retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id1, type:string, > comment:full_name1), FieldSchema(name:id2, type:string, comment:full_ > {quote} > *the rest of the content is lost*. > The content is not really lost; it just cannot be displayed normally, because > Hive stores the result in a LazyStruct, and LazyStruct uses '\t' as the field > separator: > {code:java} > // LazyStruct.java#parse() > // Go through all bytes in the byte[] > while (fieldByteEnd <= structByteEnd) { > if (fieldByteEnd == structByteEnd || bytes[fieldByteEnd] == separator) { > // Reached the end of a field? > if (lastColumnTakesRest && fieldId == fields.length - 1) { > fieldByteEnd = structByteEnd; > } > startPosition[fieldId] = fieldByteBegin; > fieldId++; > if (fieldId == fields.length || fieldByteEnd == structByteEnd) { > // All fields have been parsed, or bytes have been parsed. > // We need to set the startPosition of fields.length to ensure we > // can use the same formula to calculate the length of each field. > // For missing fields, their starting positions will all be the > same, > // which will make their lengths to be -1 and uncheckedGetField will > // return these fields as NULLs. > for (int i = fieldId; i <= fields.length; i++) { > startPosition[i] = fieldByteEnd + 1; > } > break; > } > fieldByteBegin = fieldByteEnd + 1; > fieldByteEnd++; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
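The truncation described above can be reproduced without Hive at all: any tab-delimited parser faces the same ambiguity once a field value contains the separator character. Below is a minimal sketch in plain JDK Java, loosely modeled on LazyStruct's field scanning; `parseRow` is a hypothetical helper written for this illustration, not Hive code.

```java
// Minimal reproduction of the separator collision: once a field value itself
// contains '\t', a tab-delimited reader cannot tell value bytes from field
// boundaries, so the comment is cut off at the embedded tab.
public class TabSeparatorDemo {
    /** Parse a tab-separated row into exactly nFields fields, LazyStruct-style:
     *  extra separators inside a value simply end that field early. */
    static String[] parseRow(String row, int nFields) {
        String[] out = new String[nFields];
        int begin = 0, field = 0;
        for (int i = 0; i <= row.length() && field < nFields; i++) {
            if (i == row.length() || row.charAt(i) == '\t') {
                out[field++] = row.substring(begin, i);
                begin = i + 1;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Three columns, but the comment column contains an embedded tab.
        String row = "id1" + '\t' + "string" + '\t' + "full_\tname1";
        String[] fields = parseRow(row, 3);
        // The comment field stops at the embedded tab: "full_" instead of "full_\tname1".
        System.out.println(fields[2]); // prints "full_"
    }
}
```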
[jira] [Commented] (HIVE-18095) add a unmanaged flag to triggers (applies to container based sessions)
[ https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290227#comment-16290227 ] Sergey Shelukhin commented on HIVE-18095: - Btw this only adds unmanaged flag which is more clear and can coexist with pools. I am not sure global flag makes sense... presumably most triggers will be pool specific. Maybe we can add it in phase 2 :) > add a unmanaged flag to triggers (applies to container based sessions) > -- > > Key: HIVE-18095 > URL: https://issues.apache.org/jira/browse/HIVE-18095 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18095.nogen.patch, HIVE-18095.patch > > > cc [~prasanth_j] > It should be impossible to attach global triggers for pools. Setting global > flag should probably automatically remove attachments to pools. > Global triggers would only support actions that Tez supports (for simplicity; > also, for now, move doesn't make a lot of sense because the trigger would > apply again after the move). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18095) add a unmanaged flag to triggers (applies to container based sessions)
[ https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18095: Summary: add a unmanaged flag to triggers (applies to container based sessions) (was: add a global flag to triggers (applies to all WM pools & container based sessions)) > add a unmanaged flag to triggers (applies to container based sessions) > -- > > Key: HIVE-18095 > URL: https://issues.apache.org/jira/browse/HIVE-18095 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18095.nogen.patch, HIVE-18095.patch > > > cc [~prasanth_j] > It should be impossible to attach global triggers for pools. Setting global > flag should probably automatically remove attachments to pools. > Global triggers would only support actions that Tez supports (for simplicity; > also, for now, move doesn't make a lot of sense because the trigger would > apply again after the move). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18095) add a global flag to triggers (applies to all WM pools & container based sessions)
[ https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18095: Attachment: (was: HIVE-18905.patch) > add a global flag to triggers (applies to all WM pools & container based > sessions) > -- > > Key: HIVE-18095 > URL: https://issues.apache.org/jira/browse/HIVE-18095 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18095.nogen.patch, HIVE-18095.patch > > > cc [~prasanth_j] > It should be impossible to attach global triggers for pools. Setting global > flag should probably automatically remove attachments to pools. > Global triggers would only support actions that Tez supports (for simplicity; > also, for now, move doesn't make a lot of sense because the trigger would > apply again after the move). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18095) add a global flag to triggers (applies to all WM pools & container based sessions)
[ https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18095: Attachment: HIVE-18095.nogen.patch > add a global flag to triggers (applies to all WM pools & container based > sessions) > -- > > Key: HIVE-18095 > URL: https://issues.apache.org/jira/browse/HIVE-18095 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18095.nogen.patch, HIVE-18095.patch > > > cc [~prasanth_j] > It should be impossible to attach global triggers for pools. Setting global > flag should probably automatically remove attachments to pools. > Global triggers would only support actions that Tez supports (for simplicity; > also, for now, move doesn't make a lot of sense because the trigger would > apply again after the move). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18095) add a global flag to triggers (applies to all WM pools & container based sessions)
[ https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18095: Attachment: HIVE-18095.patch > add a global flag to triggers (applies to all WM pools & container based > sessions) > -- > > Key: HIVE-18095 > URL: https://issues.apache.org/jira/browse/HIVE-18095 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18095.nogen.patch, HIVE-18095.patch > > > cc [~prasanth_j] > It should be impossible to attach global triggers for pools. Setting global > flag should probably automatically remove attachments to pools. > Global triggers would only support actions that Tez supports (for simplicity; > also, for now, move doesn't make a lot of sense because the trigger would > apply again after the move). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18095) add a global flag to triggers (applies to all WM pools & container based sessions)
[ https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18095: Status: Patch Available (was: Open) > add a global flag to triggers (applies to all WM pools & container based > sessions) > -- > > Key: HIVE-18095 > URL: https://issues.apache.org/jira/browse/HIVE-18095 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18905.patch > > > cc [~prasanth_j] > It should be impossible to attach global triggers for pools. Setting global > flag should probably automatically remove attachments to pools. > Global triggers would only support actions that Tez supports (for simplicity; > also, for now, move doesn't make a lot of sense because the trigger would > apply again after the move). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18095) add a global flag to triggers (applies to all WM pools & container based sessions)
[ https://issues.apache.org/jira/browse/HIVE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18095: Attachment: HIVE-18905.patch Patch including 2 other patches (disable WM and replace/clone RP) to avoid conflicts, and also generated code > add a global flag to triggers (applies to all WM pools & container based > sessions) > -- > > Key: HIVE-18095 > URL: https://issues.apache.org/jira/browse/HIVE-18095 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18905.patch > > > cc [~prasanth_j] > It should be impossible to attach global triggers for pools. Setting global > flag should probably automatically remove attachments to pools. > Global triggers would only support actions that Tez supports (for simplicity; > also, for now, move doesn't make a lot of sense because the trigger would > apply again after the move). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18209) Fix API call in VectorizedListColumnReader to get value from BytesColumnVector
[ https://issues.apache.org/jira/browse/HIVE-18209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290189#comment-16290189 ] Colin Ma commented on HIVE-18209: - [~Ferd], from the yetus result, it's the indentation problem, but it's the same as others, I think the problems can be ignored. > Fix API call in VectorizedListColumnReader to get value from BytesColumnVector > -- > > Key: HIVE-18209 > URL: https://issues.apache.org/jira/browse/HIVE-18209 > Project: Hive > Issue Type: Sub-task >Reporter: Colin Ma >Assignee: Colin Ma > Attachments: HIVE-18209.001.patch, HIVE-18209.002.patch, > HIVE-18209.003.patch > > > With the API BytesColumnVector.setVal(), the isRepeating attribute can't be > set correctly if ListColumnVector.child is BytesColumnVector. > BytesColumnVector.setRef() should be used to avoid this problem. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
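For readers unfamiliar with the two calls being compared: the core difference is that a setVal-style API copies bytes into the vector's own storage, while a setRef-style API keeps a reference into the caller's buffer. The toy class below is a hypothetical sketch of that distinction, not the real `BytesColumnVector` API; which call is correct in Hive depends on buffer ownership and on vector attributes such as isRepeating, which is the crux of this issue.

```java
// Toy sketch (not Hive's BytesColumnVector) of copy-vs-reference semantics:
// setVal copies the bytes out of the caller's buffer, while setRef aliases it,
// so later reuse of the buffer changes what a setRef'd entry reads back.
import java.util.Arrays;

public class ByteVectorSketch {
    final byte[][] refs;   // per-row buffer reference
    final int[] start;
    final int[] length;

    ByteVectorSketch(int rows) {
        refs = new byte[rows][]; start = new int[rows]; length = new int[rows];
    }

    void setVal(int row, byte[] buf, int off, int len) {   // copies
        refs[row] = Arrays.copyOfRange(buf, off, off + len);
        start[row] = 0; length[row] = len;
    }
    void setRef(int row, byte[] buf, int off, int len) {   // aliases
        refs[row] = buf; start[row] = off; length[row] = len;
    }
    String get(int row) { return new String(refs[row], start[row], length[row]); }

    public static void main(String[] args) {
        ByteVectorSketch v = new ByteVectorSketch(2);
        byte[] shared = "abc".getBytes();
        v.setVal(0, shared, 0, 3);
        v.setRef(1, shared, 0, 3);
        shared[0] = 'x';               // the reader reuses its buffer for the next value
        System.out.println(v.get(0));  // prints "abc" (copied, unaffected)
        System.out.println(v.get(1));  // prints "xbc" (aliased, sees the reuse)
    }
}
```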
[jira] [Commented] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290173#comment-16290173 ] Hive QA commented on HIVE-18269: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} llap-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 18s{color} | {color:red} llap-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 18s{color} | {color:red} llap-server in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} common: The patch generated 2 new + 931 unchanged - 0 fixed = 933 total (was 931) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} llap-server: The patch generated 0 new + 31 unchanged - 1 fixed = 31 total (was 32) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 8ab523b | | Default Java | 1.8.0_111 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-8229/yetus/patch-mvninstall-llap-server.txt | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-8229/yetus/patch-compile-llap-server.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-8229/yetus/patch-compile-llap-server.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8229/yetus/diff-checkstyle-common.txt | | modules | C: common llap-server U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8229/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > LLAP: Fast llap io with slow processing pipeline can lead to OOM > > > Key: HIVE-18269 > URL: https://issues.apache.org/jira/browse/HIVE-18269 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18269.1.patch, Screen Shot 2017-12-13 at 1.15.16 > AM.png > > > The pendingData linked list in the Llap IO elevator (LlapRecordReader.java) may grow > indefinitely when Llap IO is faster than the processing pipeline. Since we don't > have backpressure to slow down the IO, this can lead to indefinite growth of > pending data, causing severe GC pressure and eventually an OOM. > This specific instance of LLAP was running on HDFS on top of EBS volume >
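The missing backpressure can be sketched with a bounded queue. This is plain JDK code for illustration only, not the LlapRecordReader fix: the point is that once the queue fills up, a fast producer is forced to wait or back off instead of growing an unbounded pending list.

```java
// Illustrative only: the kind of backpressure the report says is missing.
// With an unbounded list, a fast producer grows the pending-data structure
// without limit; a bounded queue instead makes the producer wait (put) or
// fail fast (offer) once the consumer falls behind.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureSketch {
    public static void main(String[] args) throws InterruptedException {
        // Bounded, unlike an ever-growing LinkedList of pending batches.
        BlockingQueue<int[]> pendingData = new ArrayBlockingQueue<>(2);
        // Fast producer: only the first two batches fit; the third is refused
        // immediately with offer(), where put() would block until space frees up.
        boolean first = pendingData.offer(new int[1024]);
        boolean second = pendingData.offer(new int[1024]);
        boolean third = pendingData.offer(new int[1024]);
        System.out.println(first + " " + second + " " + third); // prints "true true false"
        pendingData.take();                                     // slow consumer drains one batch
        System.out.println(pendingData.offer(new int[1024]));   // prints "true"
    }
}
```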
[jira] [Updated] (HIVE-18247) Use DB auto-increment for indexes
[ https://issues.apache.org/jira/browse/HIVE-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-18247: -- Status: Patch Available (was: Open) > Use DB auto-increment for indexes > - > > Key: HIVE-18247 > URL: https://issues.apache.org/jira/browse/HIVE-18247 > Project: Hive > Issue Type: Bug > Components: Hive, Metastore >Affects Versions: 3.0.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov > Labels: datanucleus, perfomance > Attachments: HIVE-18247.01.patch > > > I initially noticed this problem in Apache Sentry - see SENTRY-1960. Hive has > the same issue. DataNucleus uses SEQUENCE table to allocate IDs which > requires raw locks on multiple tables during transactions and this creates > scalability problems. > Instead DN should rely on DB auto-increment mechanisms which are much more > scalable. > See SENTRY-1960 for extra details. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18247) Use DB auto-increment for indexes
[ https://issues.apache.org/jira/browse/HIVE-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-18247: -- Attachment: HIVE-18247.01.patch > Use DB auto-increment for indexes > - > > Key: HIVE-18247 > URL: https://issues.apache.org/jira/browse/HIVE-18247 > Project: Hive > Issue Type: Bug > Components: Hive, Metastore >Affects Versions: 3.0.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov > Labels: datanucleus, perfomance > Attachments: HIVE-18247.01.patch > > > I initially noticed this problem in Apache Sentry - see SENTRY-1960. Hive has > the same issue. DataNucleus uses SEQUENCE table to allocate IDs which > requires raw locks on multiple tables during transactions and this creates > scalability problems. > Instead DN should rely on DB auto-increment mechanisms which are much more > scalable. > See SENTRY-1960 for extra details. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-14498) Freshness period for query rewriting using materialized views
[ https://issues.apache.org/jira/browse/HIVE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290162#comment-16290162 ] Hive QA commented on HIVE-14498: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901916/HIVE-14498.01.patch {color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 80 failed/errored test(s), 11505 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite] (batchId=246) org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] (batchId=110) org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testListener (batchId=218) org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testMetaConfDuplicateNotification (batchId=218) org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testMetaConfNotifyListenersClosingClient (batchId=218) org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testMetaConfNotifyListenersNonClosingClient (batchId=218) org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testMetaConfSameHandler (batchId=218) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testTableOps (batchId=202) org.apache.hadoop.hive.ql.metadata.TestHive.testTable (batchId=276) org.apache.hadoop.hive.ql.metadata.TestHive.testThriftTable (batchId=276) org.apache.hadoop.hive.ql.metadata.TestHiveRemote.testTable (batchId=277) org.apache.hadoop.hive.ql.metadata.TestHiveRemote.testThriftTable (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testAlters (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testAuthForNotificationAPIs (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBasic (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBasicWithCM (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapLoadOnExistingDb (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapWithConcurrentDropPartition (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapWithConcurrentDropTable (batchId=226) 
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapWithConcurrentRename (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testBootstrapWithDropPartitionedTable (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testCMConflict (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConcatenatePartitionedTable (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConcatenateTable (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDeleteStagingDir (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDropPartitionEventWithPartitionOnTimestampColumn (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDrops (batchId=226) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDropsWithCM (batchId=226)
[jira] [Commented] (HIVE-14498) Freshness period for query rewriting using materialized views
[ https://issues.apache.org/jira/browse/HIVE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290161#comment-16290161 ] Hive QA commented on HIVE-14498:
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} common: The patch generated 4 new + 942 unchanged - 0 fixed = 946 total (was 942) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} standalone-metastore: The patch generated 49 new + 3479 unchanged - 6 fixed = 3528 total (was 3485) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 55s{color} | {color:red} ql: The patch generated 8 new + 2476 unchanged - 11 fixed = 2484 total (was 2487) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 40s{color} | {color:red} root: The patch generated 61 new + 7109 unchanged - 17 fixed = 7170 total (was 7126) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 175 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 3s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 57s{color} | {color:red} standalone-metastore generated 2 new + 54 unchanged - 0 fixed = 56 total (was 54) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 5m 32s{color} | {color:red} root generated 2 new + 329 unchanged - 0 fixed = 331 total (was 329) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 34s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 8ab523b |
| Default Java | 1.8.0_111 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8228/yetus/diff-checkstyle-common.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8228/yetus/diff-checkstyle-standalone-metastore.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8228/yetus/diff-checkstyle-ql.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8228/yetus/diff-checkstyle-root.txt |
| whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-8228/yetus/whitespace-eol.txt |
| whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-8228/yetus/whitespace-tabs.txt |
| javadoc |
[jira] [Updated] (HIVE-15393) Update Guava version
[ https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-15393: -- Attachment: HIVE-15393.7.patch fix merge issues > Update Guava version > > > Key: HIVE-15393 > URL: https://issues.apache.org/jira/browse/HIVE-15393 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.2.0 >Reporter: slim bouguerra >Assignee: Ashutosh Chauhan >Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-15393.2.patch, HIVE-15393.3.patch, > HIVE-15393.5.patch, HIVE-15393.6.patch, HIVE-15393.7.patch, HIVE-15393.patch > > > Druid base code is using newer version of guava 16.0.1 that is not compatible > with the current version used by Hive. > FYI Hadoop project is moving to Guava 18 not sure if it is better to move to > guava 18 or even 19. > https://issues.apache.org/jira/browse/HADOOP-10101 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
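The incompatibility described above is typical of Guava's API churn across major versions: for example, the public `Stopwatch` constructors used by older callers were superseded by static factories such as `Stopwatch.createStarted()` (added in Guava 15), so a single classpath can only satisfy one calling style. A minimal, hypothetical sketch of probing which style is available at runtime via reflection (the helper name is illustrative and not part of any patch on this issue):

```java
import java.lang.reflect.Method;

public class GuavaCompatCheck {
    // Hypothetical helper: report whether the Guava Stopwatch class on the
    // classpath exposes the newer factory-method style (Guava 15+) or only
    // the older constructor style, without linking against Guava directly.
    static String stopwatchApiStyle() {
        try {
            Class<?> sw = Class.forName("com.google.common.base.Stopwatch");
            for (Method m : sw.getMethods()) {
                if (m.getName().equals("createStarted")) {
                    return "factory";      // Guava 15+ style factory exists
                }
            }
            return "constructor";          // pre-15 API surface
        } catch (ClassNotFoundException e) {
            return "absent";               // Guava not on the classpath at all
        }
    }

    public static void main(String[] args) {
        System.out.println(stopwatchApiStyle());
    }
}
```

In a mixed Hive/Druid deployment this reports whichever Guava jar happens to win on the classpath, which is exactly the kind of ambiguity the version bump is meant to remove.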
[jira] [Commented] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290136#comment-16290136 ] Ashutosh Chauhan commented on HIVE-18271: - +1 pending tests > Druid Insert into fails with exception when committing files > > > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa > Fix For: 3.0.0 > > Attachments: HIVE-18271.2.patch, HIVE-18271.patch > > > Exception - > {code} > 03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved)' > org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty > to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved > at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1129) > at > org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1460) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1135) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:765) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:770) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:588) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:286) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) > at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242) > at > org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18201) Disable XPROD_EDGE for sq_count_check() created for scalar subqueries
[ https://issues.apache.org/jira/browse/HIVE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290120#comment-16290120 ] Ashutosh Chauhan commented on HIVE-18201: - Yes, a config-driven approach is not an ideal solution. However, getting this costing done correctly is currently non-trivial in our system, because it is mostly a runtime costing decision: whether shuffling over the network plus distributed CPU is faster than no network traffic with lower CPU parallelism. We would need to model network, CPU, and parallelism in this case. Currently we mostly do logical costing based on the cardinality of different operators, so we need to enhance our system to model these runtime parameters. Meanwhile, this patch is a step in the right direction: it makes the switch between the different edge types possible. The next step will be to estimate the threshold automatically using the costing outlined above. > Disable XPROD_EDGE for sq_count_check() created for scalar subqueries > -- > > Key: HIVE-18201 > URL: https://issues.apache.org/jira/browse/HIVE-18201 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Nita Dembla >Assignee: Ashutosh Chauhan > Attachments: HIVE-18201.1.patch, query6.explain2.out > > > sq_count_check() will either return an error at runtime or a single row. In > case of query6, the subquery has avg() function that should return a single > row. Attaching the explain. > This does not need an x-prod, because it is not useful to shuffle the big > table side for a cross-product against 1 row. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
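The trade-off discussed in the comment above can be made concrete with a toy cost model. This is purely illustrative, not Hive's actual cost-based optimizer: all names and numbers are hypothetical, and the model only captures the intuition that a shuffle edge pays a network-transfer cost in exchange for higher CPU parallelism.

```java
public class EdgeCostSketch {
    // Toy model: estimated wall-clock cost of a shuffle (x-prod style) edge,
    // which moves rowBytes over the network but then spreads the CPU work
    // across `parallelism` tasks.
    static double shuffleCost(double rowBytes, double netBytesPerSec,
                              double cpuSec, int parallelism) {
        return rowBytes / netBytesPerSec + cpuSec / parallelism;
    }

    // No-shuffle edge: no network transfer, but all CPU work on one task.
    static double noShuffleCost(double cpuSec) {
        return cpuSec;
    }

    // Decide which edge the toy model prefers.
    static boolean preferShuffle(double rowBytes, double netBytesPerSec,
                                 double cpuSec, int parallelism) {
        return shuffleCost(rowBytes, netBytesPerSec, cpuSec, parallelism)
             < noShuffleCost(cpuSec);
    }
}
```

For a cross-product against the single row produced by sq_count_check(), the network term dominates (the big table side would be shuffled for almost no CPU savings), so the model prefers not to shuffle; only when CPU work dwarfs the transfer cost does the shuffle edge win. Estimating that crossover automatically is the future step the comment describes.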
[jira] [Updated] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18248: --- Attachment: HIVE-18248.1.patch > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-18271: -- Attachment: HIVE-18271.2.patch Patch to avoid the problem by fixing isNativeTable when used during TezTask where operation has not been initialized. > Druid Insert into fails with exception when committing files > > > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa > Fix For: 3.0.0 > > Attachments: HIVE-18271.2.patch, HIVE-18271.patch > > > Exception - > {code} > 03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved)' > org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty > to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved > at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1129) > at > org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1460) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1135) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:765) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:770) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:588) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:286) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) > at 
org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242) > at > org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-15393) Update Guava version
[ https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290099#comment-16290099 ] Hive QA commented on HIVE-15393: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901911/HIVE-15393.6.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8227/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8227/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8227/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-12-13 23:40:21.210 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8227/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-12-13 23:40:21.213 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-12-13 23:40:21.700 + rm -rf ../yetus + mkdir ../yetus + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8227/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch Going to apply patch with: git apply -p0 + [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: protoc version: 250, detected platform: linux/amd64 protoc-jar: executing: [/tmp/protoc6472627220939252810.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g [ERROR] Failed to execute goal org.apache.maven.plugins:maven-dependency-plugin:2.8:copy (copy-guava-14) on project spark-client: Either artifact or artifactItems is required -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn -rf :spark-client + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12901911 - PreCommit-HIVE-Build > Update Guava version > > > Key: HIVE-15393 > URL:
[jira] [Updated] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18248: --- Attachment: (was: HIVE-18248.1.patch) > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18270) count(distinct) using join and group by produce incorrect output when hive.auto.convert.join=false and hive.auto.convert.join.noconditionaltask=false
[ https://issues.apache.org/jira/browse/HIVE-18270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290098#comment-16290098 ] Hive QA commented on HIVE-18270: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901888/HIVE-18270.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8226/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8226/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8226/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-12-13 23:38:35.723 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8226/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-12-13 23:38:35.725 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-12-13 23:38:40.512 + rm -rf ../yetus + mkdir ../yetus + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8226/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/ReduceSinkDeDuplication.java:185 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/ReduceSinkDeDuplication.java' with conflicts. Going to apply patch with: git apply -p0 error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/ReduceSinkDeDuplication.java:185 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/ReduceSinkDeDuplication.java' with conflicts. U ql/src/java/org/apache/hadoop/hive/ql/optimizer/correlation/ReduceSinkDeDuplication.java + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12901888 - PreCommit-HIVE-Build > count(distinct) using join and group by produce incorrect output when > hive.auto.convert.join=false and > hive.auto.convert.join.noconditionaltask=false > - > > Key: HIVE-18270 > URL: https://issues.apache.org/jira/browse/HIVE-18270 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1, 2.1.1, 2.2.0, 2.3.0 >Reporter: Zac Zhou >Assignee: Zac Zhou > Attachments: HIVE-18270.1.patch > > > When I run the following query: > explain > SELECT foo.id, count(distinct foo.line_id) as factor from > foo JOIN bar ON (foo.id = bar.id) > WHERE foo.orders != 'blah' > group by foo.id; > The following error is got: > java.lang.IndexOutOfBoundsException: Index: 1, Size: 1 > at java.util.ArrayList.rangeCheck(ArrayList.java:635) > at java.util.ArrayList.get(ArrayList.java:411) > at > org.apache.hadoop.hive.ql.optimizer.correlation.ReduceSinkDeDuplication$AbsctractReducerReducerProc.merge(ReduceSinkDeDuplication.java:216) > at >
[jira] [Commented] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290096#comment-16290096 ] Hive QA commented on HIVE-18271: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901885/HIVE-18271.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11529 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8225/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8225/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8225/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 17 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12901885 - PreCommit-HIVE-Build > Druid Insert into fails with exception when committing files > > > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa > Fix For: 3.0.0 > > Attachments: HIVE-18271.patch > > > Exception - > {code} > 03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved)' > org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty > to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved > at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1129) > at > org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1460) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1135) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:765) 
> at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:770) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:588) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:286) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) > at >
[jira] [Commented] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290092#comment-16290092 ] Janaki Lahorani commented on HIVE-18248: Some parameters need not change at run time. This also adds a test that checks the parameters defined as restricted in HiveConf.java. > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18248: --- Attachment: HIVE-18248.1.patch > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18248: --- Fix Version/s: 3.0.0 Status: Patch Available (was: Open) > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290076#comment-16290076 ] Jason Dere commented on HIVE-18271: --- Dug into some of the details with [~ashutoshc] and [~bslim]. The FileSinkOperator should not even be trying to call Utilities.mvFileToFinalPath(), because this is a non-native table, and there is actually logic in place for that case. The problem is that FileSinkOperator.isNativeTable depends on being set during initializeOp(), and it appears that initializeOp() is never called for the operators in the TezTask (client side). Making a patch to fix the immediate problem by turning isNativeTable into a method that does not depend on initializeOp(). As a future item, we may want to look into whether the operators within TezTask should have initialize() called on them. > Druid Insert into fails with exception when committing files > > > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa > Fix For: 3.0.0 > > Attachments: HIVE-18271.patch > > > Exception - > {code} > 03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved)' > org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty > to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved > at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1129) > at > org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1460) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1135) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:765) > at 
org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:770) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:588) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:286) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242) > at > org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
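The shape of the fix described above (a flag cached during initializeOp() versus a value derived on demand) can be sketched in miniature. This is a simplified illustration, not the actual FileSinkOperator code; the class, field, and method names below are hypothetical:

```java
public class OperatorSketch {
    // Minimal stand-in for the table descriptor available at construction time.
    static final class TableDesc {
        final boolean nonNative;
        TableDesc(boolean nonNative) { this.nonNative = nonNative; }
    }

    private final TableDesc tableDesc;
    // Before the fix: a flag with an arbitrary default, only corrected
    // inside initializeOp(). If initialization is skipped, it stays stale.
    private boolean cachedIsNativeTable = true;

    OperatorSketch(TableDesc desc) { this.tableDesc = desc; }

    // In the bug scenario this is never called on the TezTask client side.
    void initializeOp() {
        cachedIsNativeTable = !tableDesc.nonNative;
    }

    // After the fix: derive the answer from the descriptor on demand,
    // independent of whether initializeOp() ever ran.
    boolean isNativeTable() {
        return !tableDesc.nonNative;
    }

    // Stand-ins for the jobClose decision: should files be moved?
    boolean wouldMoveFilesBuggy() { return cachedIsNativeTable; }
    boolean wouldMoveFilesFixed() { return isNativeTable(); }
}
```

With the cached-flag version, a Druid (non-native) table whose operator was never initialized still reports itself as native, so the file move is wrongly attempted; deriving the answer from the descriptor removes the dependence on initialization order.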
[jira] [Updated] (HIVE-18272) Fix check-style violations in subquery code
[ https://issues.apache.org/jira/browse/HIVE-18272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18272: --- Status: Patch Available (was: Open) > Fix check-style violations in subquery code > --- > > Key: HIVE-18272 > URL: https://issues.apache.org/jira/browse/HIVE-18272 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18272.1.patch > > > Following files have quite a few checkstyle violations: > {{HiveSubQRemoveRelBuilder.java}} > {{HiveRelDecorrelator.java}} > {{HiveSubQueryRemoveRule.java}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18272) Fix check-style violations in subquery code
[ https://issues.apache.org/jira/browse/HIVE-18272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18272: --- Attachment: HIVE-18272.1.patch > Fix check-style violations in subquery code > --- > > Key: HIVE-18272 > URL: https://issues.apache.org/jira/browse/HIVE-18272 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18272.1.patch > > > Following files have quite a few checkstyle violations: > {{HiveSubQRemoveRelBuilder.java}} > {{HiveRelDecorrelator.java}} > {{HiveSubQueryRemoveRule.java}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18272) Fix check-style violations in subquery code
[ https://issues.apache.org/jira/browse/HIVE-18272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-18272: -- > Fix check-style violations in subquery code > --- > > Key: HIVE-18272 > URL: https://issues.apache.org/jira/browse/HIVE-18272 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg > > Following files have quite a few checkstyle violations: > {{HiveSubQRemoveRelBuilder.java}} > {{HiveRelDecorrelator.java}} > {{HiveSubQueryRemoveRule.java}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18258) Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken
[ https://issues.apache.org/jira/browse/HIVE-18258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18258: Status: Patch Available (was: In Progress) > Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is > broken > - > > Key: HIVE-18258 > URL: https://issues.apache.org/jira/browse/HIVE-18258 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18258.01.patch, HIVE-18258.02.patch, > HIVE-18258.03.patch > > > See Q file. Duplicate columns in key are not handled correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18258) Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken
[ https://issues.apache.org/jira/browse/HIVE-18258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18258: Attachment: HIVE-18258.03.patch > Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is > broken > - > > Key: HIVE-18258 > URL: https://issues.apache.org/jira/browse/HIVE-18258 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18258.01.patch, HIVE-18258.02.patch, > HIVE-18258.03.patch > > > See Q file. Duplicate columns in key are not handled correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18258) Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken
[ https://issues.apache.org/jira/browse/HIVE-18258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18258: Status: In Progress (was: Patch Available) > Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is > broken > - > > Key: HIVE-18258 > URL: https://issues.apache.org/jira/browse/HIVE-18258 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18258.01.patch, HIVE-18258.02.patch > > > See Q file. Duplicate columns in key are not handled correctly. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290052#comment-16290052 ] Hive QA commented on HIVE-18271: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s{color} | {color:red} ql: The patch generated 1 new + 184 unchanged - 0 fixed = 185 total (was 184) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 8ab523b | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8225/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8225/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> Druid Insert into fails with exception when committing files > > > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa > Fix For: 3.0.0 > > Attachments: HIVE-18271.patch > > > Exception - > {code} > 03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved)' > org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty > to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved > at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1129) > at > org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1460) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1135) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:765) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:770) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:588) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:286) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at
[jira] [Commented] (HIVE-18052) Run p-tests on mm tables
[ https://issues.apache.org/jira/browse/HIVE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290041#comment-16290041 ] Hive QA commented on HIVE-18052: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 35s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | 
{color:red} 0m 46s{color} | {color:red} ql: The patch generated 6 new + 1638 unchanged - 2 fixed = 1644 total (was 1640) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 53s{color} | {color:red} root: The patch generated 6 new + 2757 unchanged - 2 fixed = 2763 total (was 2759) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 4m 19s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 2s{color} | {color:red} hcatalog-unit in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 1s{color} | {color:red} hive-minikdc in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 2s{color} | {color:red} hive-unit in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 3s{color} | {color:blue} ASF License check generated no output? 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 8ab523b | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8223/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8223/yetus/diff-checkstyle-root.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-8223/yetus/whitespace-eol.txt | | javadoc | http://104.198.109.242/logs//PreCommit-HIVE-Build-8223/yetus/patch-javadoc-root.txt | | javadoc | http://104.198.109.242/logs//PreCommit-HIVE-Build-8223/yetus/patch-javadoc-itests_hcatalog-unit.txt | | javadoc | http://104.198.109.242/logs//PreCommit-HIVE-Build-8223/yetus/patch-javadoc-itests_hive-minikdc.txt | | javadoc | http://104.198.109.242/logs//PreCommit-HIVE-Build-8223/yetus/patch-javadoc-itests_hive-unit.txt | | modules | C: common standalone-metastore ql service hcatalog/core hcatalog/hcatalog-pig-adapter hcatalog/server-extensions hcatalog/webhcat/java-client hcatalog/streaming . itests/hcatalog-unit
[jira] [Commented] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290040#comment-16290040 ] Hive QA commented on HIVE-18268: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901861/HIVE-18268.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8224/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8224/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8224/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-12-13 22:21:56.390 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8224/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-12-13 22:21:56.394 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 8ab523b HIVE-18241: Query with LEFT SEMI JOIN producing wrong result (Vineet Garg, reviewed by Jesus Camacho Rodriguez) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-12-13 22:22:02.189 + rm -rf ../yetus rm: cannot remove '../yetus/itests': Directory not empty + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12901861 - PreCommit-HIVE-Build > Hive Prepared Statement when split with double quoted in query fails > > > Key: HIVE-18268 > URL: https://issues.apache.org/jira/browse/HIVE-18268 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 2.3.2 >Reporter: Choi JaeHwan >Assignee: Choi JaeHwan > Fix For: 2.3.3 > > Attachments: HIVE-18268.patch > > > HIVE-13625, Change sql statement split when odd number of escape characters, > and add parameter counter validation, above > {code:java} > // prev code > StringBuilder newSql = new StringBuilder(parts.get(0)); > for (int i = 1; i < parts.size(); i++) { > if (!parameters.containsKey(i)) { > throw new SQLException("Parameter #"+i+" is unset"); > } > newSql.append(parameters.get(i)); > newSql.append(parts.get(i)); > } > // change from HIVE-13625 > int paramLoc = 1; > while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) { > // check the user has set the needs parameters > if (parameters.containsKey(paramLoc)) { > int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', > 1); > newSql.deleteCharAt(tt); > newSql.insert(tt, parameters.get(paramLoc)); > } > paramLoc++; > } > {code} > If the number of split 
SQL and the number of parameters do not match, an > SQLException is thrown. > Currently, when splitting SQL, there is no handling for double quotes, and > when the token ('?') is between double quotes, the SQL is split. > I think when the token between double quotes is a literal, it is correct not to > split. > For example, consider the queries below: > {code:java} > // Some comments here > 1: String query = " select 1 from x where qa="?" " > 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE "ALA[d_?]%") > {code} > Here '?' is a literal, so the query should not be split. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
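The quote-aware scanning the issue asks for can be sketched as follows. This is an illustrative sketch, not the actual HivePreparedStatement code, and it deliberately ignores escaped quotes to keep the idea visible: a '?' inside single or double quotes is a literal, not a parameter marker.

```java
// Sketch of quote-aware '?' scanning, per the issue above (illustrative only).
// Assumption: '?' between single or double quotes is a literal character and
// must not be treated as a JDBC parameter placeholder. Escaped quotes inside
// string literals are not handled here.
import java.util.ArrayList;
import java.util.List;

final class ParamScanSketch {
    static List<Integer> placeholderPositions(String sql) {
        List<Integer> positions = new ArrayList<>();
        boolean inSingle = false;
        boolean inDouble = false;
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (c == '\'' && !inDouble) {
                inSingle = !inSingle;      // toggle single-quote state
            } else if (c == '"' && !inSingle) {
                inDouble = !inDouble;      // toggle double-quote state
            } else if (c == '?' && !inSingle && !inDouble) {
                positions.add(i);          // a real parameter placeholder
            }
        }
        return positions;
    }
}
```

With this approach the reporter's example `select 1 from x where qa="?"` yields no placeholders, so the parameter-count validation from HIVE-13625 would no longer reject it.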
[jira] [Commented] (HIVE-18078) WM getSession needs some retry logic
[ https://issues.apache.org/jira/browse/HIVE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289980#comment-16289980 ] Sergey Shelukhin commented on HIVE-18078: - Will rebase this after HIVE-18153 is in > WM getSession needs some retry logic > > > Key: HIVE-18078 > URL: https://issues.apache.org/jira/browse/HIVE-18078 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18078.01.patch, HIVE-18078.01.patch, > HIVE-18078.02.patch, HIVE-18078.03.patch, HIVE-18078.only.patch, > HIVE-18078.patch > > > When we get a bad session (e.g. no registry info because AM has gone > catatonic), the failure by the timeout future fails the getSession call. > The retry model in TezTask is that it would get a session (which in original > model can be completely unusable, but we still get the object), and then > retry (reopen) if it's a lemon. If the reopen fails, we fail. > getSession is not covered by this retry scheme, and should thus do its own > retries (or the retry logic needs to be changed) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
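The retry gap described above (getSession failing outright instead of retrying like reopen does) could be closed with a small retry wrapper. The names and policy below are assumptions for illustration; the real WM/Tez session code differs:

```java
// Generic retry sketch for the getSession gap described in the issue above.
// The attempt count and the idea of wrapping the session acquisition are
// assumptions; this is not the actual Hive workload-management code.
import java.util.function.Supplier;

final class RetrySketch {
    static <T> T withRetries(int maxAttempts, Supplier<T> op) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get(); // e.g. a getSession() call that may time out
            } catch (RuntimeException e) {
                last = e; // remember the failure and try the next attempt
            }
        }
        throw last; // all attempts exhausted: surface the final failure
    }
}
```

The alternative mentioned in the comment, changing the retry logic in TezTask itself so it also covers acquisition, would move this loop one level up instead.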
[jira] [Updated] (HIVE-18003) add explicit jdbc connection string args for mappings
[ https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18003: Attachment: (was: HIVE-18153.04.patch) > add explicit jdbc connection string args for mappings > - > > Key: HIVE-18003 > URL: https://issues.apache.org/jira/browse/HIVE-18003 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, > HIVE-18003.03.patch, HIVE-18003.04.patch, HIVE-18003.patch > > > 1) Force using unmanaged/containers execution. > 2) Optional - specify pool name (config setting to gate this, disabled by > default?). > In phase 2 (or 4?) we might allow #2 to be used by a user to choose between > multiple mappings if they have multiple pools they could be mapped to (i.e. > to change the ordering essentially). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18003) add explicit jdbc connection string args for mappings
[ https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18003: Attachment: HIVE-18003.04.patch Attached the wrong patch > add explicit jdbc connection string args for mappings > - > > Key: HIVE-18003 > URL: https://issues.apache.org/jira/browse/HIVE-18003 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, > HIVE-18003.03.patch, HIVE-18003.04.patch, HIVE-18003.patch > > > 1) Force using unmanaged/containers execution. > 2) Optional - specify pool name (config setting to gate this, disabled by > default?). > In phase 2 (or 4?) we might allow #2 to be used by a user to choose between > multiple mappings if they have multiple pools they could be mapped to (i.e. > to change the ordering essentially). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18230) create plan like plan, and replace plan commands for easy modification
[ https://issues.apache.org/jira/browse/HIVE-18230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289974#comment-16289974 ] Sergey Shelukhin commented on HIVE-18230: - Errors are unrelated > create plan like plan, and replace plan commands for easy modification > -- > > Key: HIVE-18230 > URL: https://issues.apache.org/jira/browse/HIVE-18230 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18230.only.nogen.patch, HIVE-18230.patch > > > Given that the plan already on the cluster cannot be altered, it would be > helpful to have create plan like plan, and replace plan commands that would > make a copy to be modified, and then rename+apply the copy in place of an > existing plan, and rename the existing active plan with a versioned name or > drop it altogether. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18203) change the way WM is enabled and allow dropping the last resource plan
[ https://issues.apache.org/jira/browse/HIVE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289972#comment-16289972 ] Sergey Shelukhin commented on HIVE-18203: - Errors are unrelated (broken in other jiras too) or stuff like OOM and timeouts in unrelated tests (e.g. testSsl) > change the way WM is enabled and allow dropping the last resource plan > -- > > Key: HIVE-18203 > URL: https://issues.apache.org/jira/browse/HIVE-18203 > Project: Hive > Issue Type: Sub-task >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin > Attachments: HIVE-18203.01.patch, HIVE-18203.02.patch, > HIVE-18203.03.patch, HIVE-18203.patch > > > Currently it's impossible to drop the last active resource plan even if WM is > disabled. It should be possible to deactivate the last resource plan AND > disable WM in the same action. Activating a resource plan should enable WM in > this case. > This should interact with the WM queue config in a sensible manner. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18221) test acid default
[ https://issues.apache.org/jira/browse/HIVE-18221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289919#comment-16289919 ] Hive QA commented on HIVE-18221: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901815/HIVE-18221.10.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 435 failed/errored test(s), 9752 tests executed *Failed tests:* {noformat} TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93)
[jira] [Commented] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289886#comment-16289886 ] Sergey Shelukhin commented on HIVE-18269: - Is this actually going to work? Seems like the sync block inside which take and put are happening will block each other, so if one blocks the other cannot enter and unblock the first. > LLAP: Fast llap io with slow processing pipeline can lead to OOM > > > Key: HIVE-18269 > URL: https://issues.apache.org/jira/browse/HIVE-18269 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18269.1.patch, Screen Shot 2017-12-13 at 1.15.16 > AM.png > > > pendingData linked list in Llap IO elevator (LlapRecordReader.java) may grow > indefinitely when Llap IO is faster than processing pipeline. Since we don't > have backpressure to slow down the IO, this can lead to indefinite growth of > pending data leading to severe GC pressure and eventually lead to OOM. > This specific instance of LLAP was running on HDFS on top of EBS volume > backed by SSD. The query that triggered this is issue was ANALYZE STATISTICS > .. FOR COLUMNS which also gather bitvectors. Fast IO and Slow processing case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
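The backpressure idea under discussion, and the deadlock hazard Sergey raises, can be sketched with a bounded queue. This is a generic illustration, not the LlapRecordReader patch: a bounded queue makes the fast producer block when the consumer falls behind, but the blocking put()/take() calls must not sit inside a shared synchronized block, or the blocked side would hold the lock the other side needs. ArrayBlockingQueue does its own internal locking, so no external synchronization is used here.

```java
// Sketch of bounded-queue backpressure (illustrative, not the actual patch).
// The producer stands in for fast LLAP IO; the consumer for the slow
// processing pipeline. Capacity bounds memory instead of letting pending
// data grow without limit.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

final class BackpressureSketch {
    static int produceAndConsume(int items, int capacity) throws InterruptedException {
        BlockingQueue<Integer> pending = new ArrayBlockingQueue<>(capacity);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    pending.put(i); // blocks when the queue is full: backpressure
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int consumed = 0;
        for (int i = 0; i < items; i++) {
            pending.take(); // blocks when the queue is empty
            consumed++;
        }
        producer.join();
        return consumed;
    }
}
```

If both put() and take() were instead wrapped in `synchronized (lock) { ... }` on a shared lock, a producer blocked on a full queue would hold the lock forever and the consumer could never enter to drain it, which is exactly the concern in the comment above.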
[jira] [Comment Edited] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289886#comment-16289886 ] Sergey Shelukhin edited comment on HIVE-18269 at 12/13/17 8:51 PM: --- Is this actually going to work? Seems like the sync blocks inside which take and put are happening will obstruct each other, so if one blocks on the queue, the other cannot enter and unblock the first. was (Author: sershe): Is this actually going to work? Seems like the sync blocks inside which take and put are happening will obstruct each other, so if one blocks the other cannot enter and unblock the first. > LLAP: Fast llap io with slow processing pipeline can lead to OOM > > > Key: HIVE-18269 > URL: https://issues.apache.org/jira/browse/HIVE-18269 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18269.1.patch, Screen Shot 2017-12-13 at 1.15.16 > AM.png > > > pendingData linked list in Llap IO elevator (LlapRecordReader.java) may grow > indefinitely when Llap IO is faster than processing pipeline. Since we don't > have backpressure to slow down the IO, this can lead to indefinite growth of > pending data leading to severe GC pressure and eventually lead to OOM. > This specific instance of LLAP was running on HDFS on top of EBS volume > backed by SSD. The query that triggered this is issue was ANALYZE STATISTICS > .. FOR COLUMNS which also gather bitvectors. Fast IO and Slow processing case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289886#comment-16289886 ] Sergey Shelukhin edited comment on HIVE-18269 at 12/13/17 8:51 PM: --- Is this actually going to work? Seems like the sync blocks inside which take and put are happening will obstruct each other, so if one blocks the other cannot enter and unblock the first. was (Author: sershe): Is this actually going to work? Seems like the sync block inside which take and put are happening will block each other, so if one blocks the other cannot enter and unblock the first. > LLAP: Fast llap io with slow processing pipeline can lead to OOM > > > Key: HIVE-18269 > URL: https://issues.apache.org/jira/browse/HIVE-18269 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18269.1.patch, Screen Shot 2017-12-13 at 1.15.16 > AM.png > > > pendingData linked list in Llap IO elevator (LlapRecordReader.java) may grow > indefinitely when Llap IO is faster than processing pipeline. Since we don't > have backpressure to slow down the IO, this can lead to indefinite growth of > pending data leading to severe GC pressure and eventually lead to OOM. > This specific instance of LLAP was running on HDFS on top of EBS volume > backed by SSD. The query that triggered this is issue was ANALYZE STATISTICS > .. FOR COLUMNS which also gather bitvectors. Fast IO and Slow processing case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18221) test acid default
[ https://issues.apache.org/jira/browse/HIVE-18221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289842#comment-16289842 ] Hive QA commented on HIVE-18221: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | 
{color:red} 0m 17s{color} | {color:red} standalone-metastore: The patch generated 8 new + 209 unchanged - 0 fixed = 217 total (was 209) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s{color} | {color:red} ql: The patch generated 4 new + 668 unchanged - 0 fixed = 672 total (was 668) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / b7be4ac | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8221/yetus/diff-checkstyle-standalone-metastore.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8221/yetus/diff-checkstyle-ql.txt | | modules | C: common standalone-metastore ql hcatalog/hcatalog-pig-adapter U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8221/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> test acid default > - > > Key: HIVE-18221 > URL: https://issues.apache.org/jira/browse/HIVE-18221 > Project: Hive > Issue Type: Test > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18221.01.patch, HIVE-18221.02.patch, > HIVE-18221.03.patch, HIVE-18221.04.patch, HIVE-18221.07.patch, > HIVE-18221.08.patch, HIVE-18221.09.patch, HIVE-18221.10.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18241) Query with LEFT SEMI JOIN producing wrong result
[ https://issues.apache.org/jira/browse/HIVE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18241: --- Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for reviewing [~jcamachorodriguez]. Pushed this to master. > Query with LEFT SEMI JOIN producing wrong result > > > Key: HIVE-18241 > URL: https://issues.apache.org/jira/browse/HIVE-18241 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg > Attachments: HIVE-18241.1.patch, HIVE-18241.2.patch, > HIVE-18241.3.patch > > > The following query produces a wrong result: > {code:sql} > select key, value from src outr left semi join (select a.key, b.value from > src a join (select distinct value from src) b on a.value > b.value group by > a.key, b.value) inr on outr.key=inr.key and outr.value=inr.value; > {code} > The expected result is an empty set, but it outputs a bunch of rows. > The schema for the {{src}} table can be found in {{data/scripts/q_test_init.sql}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18271) Druid Insert into fails with exception when committing files
[ https://issues.apache.org/jira/browse/HIVE-18271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289799#comment-16289799 ] Jason Dere commented on HIVE-18271: --- Agree with the patch .. the logic should ensure the new tmpPath is a unique FS path, and since Utilities.mvFileToFinalPath() has renamed the original tmpPath, then anything that tried to delete the original path later would not work because it has been renamed to tmpPath.moved. So there might need to be cleanup of tmpPath if we are renaming it here. One thing - can you move the call to fs.delete(tmpPath, true) down one line, to just after the if/else statement? That way tmpPath gets cleaned up for both BlobStore/non-BlobStore FS cases. And maybe add a comment to the fs.delete() call - reference this use case or Jira. > Druid Insert into fails with exception when committing files > > > Key: HIVE-18271 > URL: https://issues.apache.org/jira/browse/HIVE-18271 > Project: Hive > Issue Type: Bug >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa > Fix For: 3.0.0 > > Attachments: HIVE-18271.patch > > > Exception - > {code} > 03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved)' > org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty > to: > hdfs://ctr-e136-1513029738776-2163-01-03.hwx.site:8020/apps/hive/warehouse/_tmp.all100k_druid_initial_empty.moved > at org.apache.hadoop.hive.ql.exec.Utilities.rename(Utilities.java:1129) > at > org.apache.hadoop.hive.ql.exec.Utilities.mvFileToFinalPath(Utilities.java:1460) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1135) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:765) > at 
org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:770) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:588) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:286) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1211) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1204) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242) > at > org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:350) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
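The cleanup ordering suggested in the review comment above can be sketched outside Hive. This is an illustrative stand-in, not Hive's actual Utilities.mvFileToFinalPath: java.nio.file replaces Hadoop's FileSystem, and the class and method names are invented. The point it demonstrates is placing the delete of the renamed tmp path after the if/else, so both the BlobStore and non-BlobStore branches are cleaned up.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative sketch only -- not Hive's Utilities.mvFileToFinalPath.
public class TmpPathCleanupSketch {
    public static void finalizePath(Path tmpPath, Path finalPath, boolean isBlobStore)
            throws IOException {
        // rename the original tmpPath out of the way (the step HIVE-18271 hit)
        Path moved = tmpPath.resolveSibling(tmpPath.getFileName() + ".moved");
        Files.move(tmpPath, moved, StandardCopyOption.REPLACE_EXISTING);
        if (isBlobStore) {
            // blob-store-like case: copy into place, which leaves 'moved' behind
            Files.copy(moved, finalPath, StandardCopyOption.REPLACE_EXISTING);
        } else {
            // plain-FS case: rename into place, which consumes 'moved'
            Files.move(moved, finalPath, StandardCopyOption.REPLACE_EXISTING);
        }
        // cleanup after the if/else, per the comment above: removes the leftover
        // tmp path in the copy branch and is a harmless no-op in the rename branch
        Files.deleteIfExists(moved);
    }
}
```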
[jira] [Commented] (HIVE-18125) Support arbitrary file names in input to Load Data
[ https://issues.apache.org/jira/browse/HIVE-18125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289798#comment-16289798 ] Hive QA commented on HIVE-18125: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901814/HIVE-18125.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 11530 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[skewjoinopt11] (batchId=71) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=178) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[load_data_into_acid] (batchId=92) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=233) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8220/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8220/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8220/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 19 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12901814 - PreCommit-HIVE-Build > Support arbitrary file names in input to Load Data > -- > > Key: HIVE-18125 > URL: https://issues.apache.org/jira/browse/HIVE-18125 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18125.01.patch > > > HIVE-17361 only allows 0_0 and _0_copy_1. Should it support > arbitrary names? > If so, should it sort them and rename _0, 0001_0, etc? > This is probably a lot easier than changing the whole code base to assign > proper 'bucket' (writerId) everywhere Acid reads such file. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289779#comment-16289779 ] Prasanth Jayachandran commented on HIVE-18269: -- [~sershe] can you please take a look? > LLAP: Fast llap io with slow processing pipeline can lead to OOM > > > Key: HIVE-18269 > URL: https://issues.apache.org/jira/browse/HIVE-18269 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18269.1.patch, Screen Shot 2017-12-13 at 1.15.16 > AM.png > > > The pendingData linked list in the Llap IO elevator (LlapRecordReader.java) may grow > indefinitely when Llap IO is faster than the processing pipeline. Since we don't > have backpressure to slow down the IO, this can lead to indefinite growth of > pending data, leading to severe GC pressure and eventually to OOM. > This specific instance of LLAP was running on HDFS on top of an EBS volume > backed by SSD. The query that triggered this issue was ANALYZE STATISTICS > .. FOR COLUMNS, which also gathers bitvectors. Fast IO and slow processing case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
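The missing-backpressure problem described above can be illustrated with a bounded queue. This is a generic sketch of the idea, not the HIVE-18269 patch; the class and method names are invented. Replacing an unbounded pending list with a bounded blocking queue makes the fast IO producer block once the slow consumer falls behind, capping memory instead of letting pending data grow without limit.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Generic backpressure sketch, not the actual HIVE-18269 patch.
public class BoundedPendingData<T> {
    private final BlockingQueue<T> pendingData;

    public BoundedPendingData(int capacity) {
        // bounded: at most 'capacity' batches can be pending at once
        this.pendingData = new ArrayBlockingQueue<>(capacity);
    }

    // called by the IO elevator thread; blocks when the queue is full,
    // which is exactly the backpressure the unbounded list lacked
    public void enqueueFromIo(T columnBatch) throws InterruptedException {
        pendingData.put(columnBatch);
    }

    // called by the processing pipeline; blocks when the queue is empty
    public T dequeueForProcessing() throws InterruptedException {
        return pendingData.take();
    }

    public int size() {
        return pendingData.size();
    }
}
```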
[jira] [Updated] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18269: - Status: Patch Available (was: Open) > LLAP: Fast llap io with slow processing pipeline can lead to OOM > > > Key: HIVE-18269 > URL: https://issues.apache.org/jira/browse/HIVE-18269 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18269.1.patch, Screen Shot 2017-12-13 at 1.15.16 > AM.png > > > The pendingData linked list in the Llap IO elevator (LlapRecordReader.java) may grow > indefinitely when Llap IO is faster than the processing pipeline. Since we don't > have backpressure to slow down the IO, this can lead to indefinite growth of > pending data, leading to severe GC pressure and eventually to OOM. > This specific instance of LLAP was running on HDFS on top of an EBS volume > backed by SSD. The query that triggered this issue was ANALYZE STATISTICS > .. FOR COLUMNS, which also gathers bitvectors. Fast IO and slow processing case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18269: - Attachment: HIVE-18269.1.patch Haven't tested the patch yet on the repro cluster. The cluster is busy right now. Will test it when the cluster frees up. > LLAP: Fast llap io with slow processing pipeline can lead to OOM > > > Key: HIVE-18269 > URL: https://issues.apache.org/jira/browse/HIVE-18269 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-18269.1.patch, Screen Shot 2017-12-13 at 1.15.16 > AM.png > > > The pendingData linked list in the Llap IO elevator (LlapRecordReader.java) may grow > indefinitely when Llap IO is faster than the processing pipeline. Since we don't > have backpressure to slow down the IO, this can lead to indefinite growth of > pending data, leading to severe GC pressure and eventually to OOM. > This specific instance of LLAP was running on HDFS on top of an EBS volume > backed by SSD. The query that triggered this issue was ANALYZE STATISTICS > .. FOR COLUMNS, which also gathers bitvectors. Fast IO and slow processing case. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18228) Azure credential properties should be added to the HiveConf hidden list
[ https://issues.apache.org/jira/browse/HIVE-18228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289774#comment-16289774 ] Andrew Sherman commented on HIVE-18228: --- Thanks [~pvary] for the commit and the suggestion about documentation. > Azure credential properties should be added to the HiveConf hidden list > --- > > Key: HIVE-18228 > URL: https://issues.apache.org/jira/browse/HIVE-18228 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman > Labels: TODOC3.0 > Fix For: 3.0.0 > > Attachments: HIVE-18228.1.patch, HIVE-18228.2.patch, > HIVE-18228.3.patch > > > The HIVE_CONF_HIDDEN_LIST("hive.conf.hidden.list") already contains keys > containing AWS credentials. The Azure properties to be added are: > * dfs.adls.oauth2.credential > * fs.adl.oauth2.credential -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18201) Disable XPROD_EDGE for sq_count_check() created for scalar subqueries
[ https://issues.apache.org/jira/browse/HIVE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289752#comment-16289752 ] Gunther Hagleitner commented on HIVE-18201: --- The config introduced is very hard to set right. Default is 1 row. I'm pretty sure the same logic applies to 2 rows, probably for 10 and 100, maybe not for 1000? Would be nice to try to get this right in the planner - not possible? > Disable XPROD_EDGE for sq_count_check() created for scalar subqueries > -- > > Key: HIVE-18201 > URL: https://issues.apache.org/jira/browse/HIVE-18201 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Nita Dembla >Assignee: Ashutosh Chauhan > Attachments: HIVE-18201.1.patch, query6.explain2.out > > > sq_count_check() will either return an error at runtime or a single row. In > case of query6, the subquery has avg() function that should return a single > row. Attaching the explain. > This does not need an x-prod, because it is not useful to shuffle the big > table side for a cross-product against 1 row. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18125) Support arbitrary file names in input to Load Data
[ https://issues.apache.org/jira/browse/HIVE-18125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289741#comment-16289741 ] Hive QA commented on HIVE-18125: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s{color} | {color:red} ql: The patch generated 3 new + 354 unchanged - 0 fixed = 357 total (was 354) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / b7be4ac | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8220/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8220/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Support arbitrary file names in input to Load Data > -- > > Key: HIVE-18125 > URL: https://issues.apache.org/jira/browse/HIVE-18125 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18125.01.patch > > > HIVE-17361 only allows 0_0 and _0_copy_1. Should it support > arbitrary names? > If so, should it sort them and rename _0, 0001_0, etc? > This is probably a lot easier than changing the whole code base to assign > proper 'bucket' (writerId) everywhere Acid reads such file. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18132) Acid Table: Exchange Partition
[ https://issues.apache.org/jira/browse/HIVE-18132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18132: -- Issue Type: Sub-task (was: New Feature) Parent: HIVE-17339 > Acid Table: Exchange Partition > -- > > Key: HIVE-18132 > URL: https://issues.apache.org/jira/browse/HIVE-18132 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman > > This command currently renames a directory under one table to be under another > table's namespace. This can't work for Acid since the data itself has embedded > transaction info. > If src is not full acid, it could be added to the target side like Load Data - > into delta/base, but if the source side is also Acid the IDs won't make sense in > the target table. It could match if using global txn ids but may not match with > per-table write ids if some ID from src is committed but the same ID in > target is aborted. > Does this command currently work with bucketed tables? -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17339) Acid feature parity laundry list
[ https://issues.apache.org/jira/browse/HIVE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-17339: -- Description: 1. insert into T select - this can sometimes use DISTCP (hive.exec.copyfile.maxsize). What does this mean for acid? 2. Exchange Partition - HIVE-18132 was: 1. insert into T select - this can sometimes use DISTCP (hive.exec.copyfile.maxsize). What does this mean for acid? 2. > Acid feature parity laundry list > > > Key: HIVE-17339 > URL: https://issues.apache.org/jira/browse/HIVE-17339 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > 1. insert into T select - this can sometimes use DISTCP > (hive.exec.copyfile.maxsize). What does this mean for acid? > 2. Exchange Partition - HIVE-18132 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
[ https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289711#comment-16289711 ] Hive QA commented on HIVE-18265: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901724/HIVE-18265.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11529 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanQpChanges (batchId=285) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveBackKill (batchId=236) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8219/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8219/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8219/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 20 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12901724 - PreCommit-HIVE-Build > desc formatted/extended or show create table can not fully display the result > when field or table comment contains tab character > > > Key: HIVE-18265 > URL: https://issues.apache.org/jira/browse/HIVE-18265 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 3.0.0 >Reporter: Hui Huang >Assignee: Hui Huang > Fix For: 3.0.0 > > Attachments: HIVE-18265.patch > > > Here are some examples: > create table test_comment (id1 string comment 'full_\tname1', id2 string > comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile; > When execute `show create table test_comment`, we can see the following > content in the console, > {quote} > createtab_stmt > CREATE TABLE `test_comment`( > `id1` string COMMENT 'full_ > `id2` string COMMENT 'full_ > `id3` string COMMENT 'full_ > ROW FORMAT SERDE > 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' > STORED AS INPUTFORMAT > 'org.apache.hadoop.mapred.TextInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' > LOCATION > 'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1513095570') > {quote} > And the output of `desc formatted table ` is a little similar, > {quote} > col_name data_type comment > \# col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > \# Detailed Table Information > (ignore)... > {quote} > When execute `desc extended
[jira] [Commented] (HIVE-18270) count(distinct) using join and group by produce incorrect output when hive.auto.convert.join=false and hive.auto.convert.join.noconditionaltask=false
[ https://issues.apache.org/jira/browse/HIVE-18270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289695#comment-16289695 ] Ashutosh Chauhan commented on HIVE-18270: - [~yuan_zac] Can you please also add a testcase? > count(distinct) using join and group by produce incorrect output when > hive.auto.convert.join=false and > hive.auto.convert.join.noconditionaltask=false > - > > Key: HIVE-18270 > URL: https://issues.apache.org/jira/browse/HIVE-18270 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1, 2.1.1, 2.2.0, 2.3.0 >Reporter: Zac Zhou >Assignee: Zac Zhou > Attachments: HIVE-18270.1.patch > > > When I run the following query: > explain > SELECT foo.id, count(distinct foo.line_id) as factor from > foo JOIN bar ON (foo.id = bar.id) > WHERE foo.orders != 'blah' > group by foo.id; > The following error is got: > java.lang.IndexOutOfBoundsException: Index: 1, Size: 1 > at java.util.ArrayList.rangeCheck(ArrayList.java:635) > at java.util.ArrayList.get(ArrayList.java:411) > at > org.apache.hadoop.hive.ql.optimizer.correlation.ReduceSinkDeDuplication$AbsctractReducerReducerProc.merge(ReduceSinkDeDuplication.java:216) > at > org.apache.hadoop.hive.ql.optimizer.correlation.ReduceSinkDeDuplication$JoinReducerProc.process(ReduceSinkDeDuplication.java:557) > at > org.apache.hadoop.hive.ql.optimizer.correlation.ReduceSinkDeDuplication$AbsctractReducerReducerProc.process(ReduceSinkDeDuplication.java:166) > at > org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:133) > at > org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110) > at > 
org.apache.hadoop.hive.ql.optimizer.correlation.ReduceSinkDeDuplication.transform(ReduceSinkDeDuplication.java:108) > at > org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:192) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10201) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:209) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227) > at > org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > It looks like it is a bug of ReduceSinkDeDuplication 
optimizer. > Since the columns of count distinct need to be added into the reduce key for > sorting, the ReduceSink of the group by can't be replaced with the one from the join. > In the case of a count distinct query, the ReduceSink of the group by should not be merged. > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
[ https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289665#comment-16289665 ] Andrew Sherman commented on HIVE-18265: --- Is it possible to add a test for this? Thanks > desc formatted/extended or show create table can not fully display the result > when field or table comment contains tab character > > > Key: HIVE-18265 > URL: https://issues.apache.org/jira/browse/HIVE-18265 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 3.0.0 >Reporter: Hui Huang >Assignee: Hui Huang > Fix For: 3.0.0 > > Attachments: HIVE-18265.patch > > > Here are some examples: > create table test_comment (id1 string comment 'full_\tname1', id2 string > comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile; > When execute `show create table test_comment`, we can see the following > content in the console, > {quote} > createtab_stmt > CREATE TABLE `test_comment`( > `id1` string COMMENT 'full_ > `id2` string COMMENT 'full_ > `id3` string COMMENT 'full_ > ROW FORMAT SERDE > 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' > STORED AS INPUTFORMAT > 'org.apache.hadoop.mapred.TextInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' > LOCATION > 'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1513095570') > {quote} > And the output of `desc formatted table ` is a little similar, > {quote} > col_name data_type comment > \# col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > \# Detailed Table Information > (ignore)... 
> {quote}
> When executing `desc extended test_comment`, the problem is more obvious:
> {quote}
> col_name data_type comment
> id1 string full_
> id2 string full_
> id3 string full_
> Detailed Table Information  Table(tableName:test_comment, dbName:huanghuitest, owner:huanghui, createTime:1513095570, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id1, type:string, comment:full_name1), FieldSchema(name:id2, type:string, comment:full_
> {quote}
> *the rest of the content is lost*.
> The content is not really lost; it just cannot be displayed normally, because Hive stores the result in a LazyStruct, and LazyStruct uses '\t' as the field separator:
> {code:java}
> // LazyStruct.java#parse()
> // Go through all bytes in the byte[]
> while (fieldByteEnd <= structByteEnd) {
>   if (fieldByteEnd == structByteEnd || bytes[fieldByteEnd] == separator) {
>     // Reached the end of a field?
>     if (lastColumnTakesRest && fieldId == fields.length - 1) {
>       fieldByteEnd = structByteEnd;
>     }
>     startPosition[fieldId] = fieldByteBegin;
>     fieldId++;
>     if (fieldId == fields.length || fieldByteEnd == structByteEnd) {
>       // All fields have been parsed, or all bytes have been parsed.
>       // We need to set the startPosition of fields.length to ensure we
>       // can use the same formula to calculate the length of each field.
>       // For missing fields, their starting positions will all be the same,
>       // which will make their lengths to be -1 and uncheckedGetField will
>       // return these fields as NULLs.
>       for (int i = fieldId; i <= fields.length; i++) {
>         startPosition[i] = fieldByteEnd + 1;
>       }
>       break;
>     }
>     fieldByteBegin = fieldByteEnd + 1;
>     fieldByteEnd++;
> {code}
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
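The collision described above can be reproduced with a minimal sketch: the row is serialized with '\t' as the field separator, so a tab embedded in a comment is indistinguishable from a field boundary. This is a simplified stand-in for Hive's LazyStruct parsing, not its actual code:

```java
// Simplified illustration (NOT Hive's LazyStruct): split a serialized row
// on the '\t' separator into a fixed number of fields.
public class TabSplitDemo {
    static String[] splitRow(String row, char separator, int numFields) {
        String[] fields = new String[numFields];
        int begin = 0, fieldId = 0;
        for (int i = 0; i <= row.length() && fieldId < numFields; i++) {
            if (i == row.length() || row.charAt(i) == separator) {
                fields[fieldId++] = row.substring(begin, i);
                begin = i + 1;
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        // Three columns (name, type, comment), but the comment itself
        // contains a tab: 'full_\tname1'.
        String row = "id1\tstring\tfull_\tname1";
        String[] fields = splitRow(row, '\t', 3);
        // The comment is cut at the embedded tab; "name1" is lost.
        System.out.println(fields[2]);
    }
}
```

Running this shows the comment field truncated to `full_`, matching the `desc` output in the ticket.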
[jira] [Updated] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18269:
-
Description: The pendingData linked list in the Llap IO elevator (LlapRecordReader.java) may grow indefinitely when Llap IO is faster than the processing pipeline. Since we don't have backpressure to slow down the IO, this can lead to indefinite growth of pending data, leading to severe GC pressure and eventually to OOM. This specific instance of LLAP was running on HDFS on top of an EBS volume backed by SSD. The query that triggered this issue was ANALYZE STATISTICS .. FOR COLUMNS, which also gathers bitvectors. Fast IO and slow processing case.
was: pendingData linked list in Llap IO elevator (LlapRecordReader.java) may have grow indefinitely when Llap IO is faster than processing pipeline. Since we don't have backpressure to slow down the IO, this can lead to indefinite growth of pending data leading to severe GC pressure and eventually lead to OOM.
> LLAP: Fast llap io with slow processing pipeline can lead to OOM
>
> Key: HIVE-18269
> URL: https://issues.apache.org/jira/browse/HIVE-18269
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Attachments: Screen Shot 2017-12-13 at 1.15.16 AM.png
>
> The pendingData linked list in the Llap IO elevator (LlapRecordReader.java) may grow indefinitely when Llap IO is faster than the processing pipeline. Since we don't have backpressure to slow down the IO, this can lead to indefinite growth of pending data, leading to severe GC pressure and eventually to OOM.
> This specific instance of LLAP was running on HDFS on top of an EBS volume backed by SSD. The query that triggered this issue was ANALYZE STATISTICS .. FOR COLUMNS, which also gathers bitvectors. Fast IO and slow processing case.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
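The missing-backpressure problem described in this ticket is the classic unbounded producer/consumer queue. A minimal sketch of the bounded alternative, using a standard `BlockingQueue` so the IO thread blocks when the consumer falls behind (illustrative only; this is not Hive's actual fix, and the capacity value is arbitrary):

```java
// Sketch: replace an unbounded pendingData list with a bounded queue so a
// fast producer (IO) cannot outrun a slow consumer (processing pipeline).
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedElevator<T> {
    private final BlockingQueue<T> pendingData;

    public BoundedElevator(int capacity) {
        pendingData = new ArrayBlockingQueue<>(capacity);
    }

    // IO thread: put() blocks once the queue is full, bounding memory use.
    public void offerFromIo(T batch) throws InterruptedException {
        pendingData.put(batch);
    }

    // Processing pipeline: drains batches as it is able.
    public T takeForProcessing() throws InterruptedException {
        return pendingData.take();
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedElevator<String> elevator = new BoundedElevator<>(2);
        elevator.offerFromIo("batch-1");
        System.out.println(elevator.takeForProcessing());
    }
}
```

The trade-off is that a blocked IO thread idles the elevator; the comment below this ticket discusses making the bound adaptive instead of fixed.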
[jira] [Updated] (HIVE-17710) LockManager should only lock Managed tables
[ https://issues.apache.org/jira/browse/HIVE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-17710:
--
Resolution: Fixed
Fix Version/s: 3.0.0
Release Note: The LockManager, which is installed automatically when enabling Acid (hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager), does not lock any External Tables, since Hive has no control over what is modifying the data in such tables.
Status: Resolved (was: Patch Available)
No related failures; committed to master (fixed checkstyle nags). Thanks Alan for the review.
> LockManager should only lock Managed tables
> ---
>
> Key: HIVE-17710
> URL: https://issues.apache.org/jira/browse/HIVE-17710
> Project: Hive
> Issue Type: New Feature
> Components: Transactions
> Reporter: Eugene Koifman
> Assignee: Eugene Koifman
> Fix For: 3.0.0
>
> Attachments: HIVE-17710.01.patch, HIVE-17710.02.patch, HIVE-17710.03.patch, HIVE-17710.04.patch, HIVE-17710.04.patch
>
> Should the LM take locks on External tables? Out of the box the Acid LM is conservative, which can cause throughput issues.
> A better strategy may be to exclude External tables but enable an explicit "lock table/partition" command (only on external tables?).
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
[ https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289641#comment-16289641 ] Hive QA commented on HIVE-18265: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 3 new + 1415 unchanged - 0 fixed = 1418 total (was 1415) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 7ea263c | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8219/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8219/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> desc formatted/extended or show create table can not fully display the result > when field or table comment contains tab character > > > Key: HIVE-18265 > URL: https://issues.apache.org/jira/browse/HIVE-18265 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 3.0.0 >Reporter: Hui Huang >Assignee: Hui Huang > Fix For: 3.0.0 > > Attachments: HIVE-18265.patch > > > Here are some examples: > create table test_comment (id1 string comment 'full_\tname1', id2 string > comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile; > When execute `show create table test_comment`, we can see the following > content in the console, > {quote} > createtab_stmt > CREATE TABLE `test_comment`( > `id1` string COMMENT 'full_ > `id2` string COMMENT 'full_ > `id3` string COMMENT 'full_ > ROW FORMAT SERDE > 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' > STORED AS INPUTFORMAT > 'org.apache.hadoop.mapred.TextInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' > LOCATION > 'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1513095570') > {quote} > And the output of `desc formatted table ` is a little similar, > {quote} > col_name data_type comment > \# col_name data_type comment > id1 string full_ > id2 string full_ > id3 string
[jira] [Commented] (HIVE-18269) LLAP: Fast llap io with slow processing pipeline can lead to OOM
[ https://issues.apache.org/jira/browse/HIVE-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289635#comment-16289635 ] Sergey Shelukhin commented on HIVE-18269:
-
Interesting... CV memory use may be hard to estimate. Maybe the backpressure can be based on list length: start at a relatively low value, and then, if backpressure was triggered before and the list has since emptied (causing operators to wait), the value would go up?
> LLAP: Fast llap io with slow processing pipeline can lead to OOM
>
> Key: HIVE-18269
> URL: https://issues.apache.org/jira/browse/HIVE-18269
> Project: Hive
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Attachments: Screen Shot 2017-12-13 at 1.15.16 AM.png
>
> pendingData linked list in Llap IO elevator (LlapRecordReader.java) may grow indefinitely when Llap IO is faster than processing pipeline. Since we don't have backpressure to slow down the IO, this can lead to indefinite growth of pending data leading to severe GC pressure and eventually lead to OOM.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
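The adaptive bound suggested in this comment could be sketched roughly as follows. All names and the doubling policy are hypothetical illustrations of the idea, not Hive code:

```java
// Sketch of an adaptive list-length bound: start low; if backpressure fired
// earlier but the list then drained completely (the consumer caught up and
// had to wait), raise the bound so IO throttles less aggressively next time.
public class AdaptiveBound {
    private int bound;
    private boolean backpressured;

    public AdaptiveBound(int initialBound) {
        this.bound = initialBound;
    }

    public int bound() {
        return bound;
    }

    // Called by the IO side when the pending list hits the bound.
    public void onBackpressure() {
        backpressured = true;
    }

    // Called by the consumer when it finds the pending list empty.
    public void onListEmpty() {
        if (backpressured) {
            bound *= 2;  // we throttled IO and then starved the consumer
            backpressured = false;
        }
    }

    public static void main(String[] args) {
        AdaptiveBound b = new AdaptiveBound(4);
        b.onBackpressure();  // IO was throttled...
        b.onListEmpty();     // ...yet the consumer later starved: grow bound
        System.out.println(b.bound());
    }
}
```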
[jira] [Commented] (HIVE-18124) clean up isAcidTable() API vs isInsertOnlyTable()
[ https://issues.apache.org/jira/browse/HIVE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289618#comment-16289618 ] Eugene Koifman commented on HIVE-18124: --- no related failures [~alangates] could you review please > clean up isAcidTable() API vs isInsertOnlyTable() > -- > > Key: HIVE-18124 > URL: https://issues.apache.org/jira/browse/HIVE-18124 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18124.01.patch, HIVE-18124.02.patch, > HIVE-18124.03.patch, HIVE-18124.04.patch, HIVE-18124.05.patch > > > With the addition of MM tables (_AcidUtils.isInsertOnlyTable(table)_) the > methods in AcidUtils and dependent places are very muddled. There are now a > number of places where we have something like _isAcidTable = > AcidUtils.isFullAcidTable(table)_ and a later getter > _boolean isAcidTable() \{ return isAcidTable;\}_ > Need to clean it up so that there is a isTransactional(Table) that checks > transactional=true setting and isAcid(Table) to mean full ACID and > isInsertOnly(Table) to mean MM tables. > This would accurately describe the semantics of the tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
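The API shape proposed in the ticket description could look roughly like this. The method names come from the description; `transactional` and `transactional_properties=insert_only` are the table properties Hive uses for these modes, but the bodies below are an illustrative sketch, not Hive's AcidUtils implementation:

```java
// Sketch of the isTransactional / isAcid / isInsertOnly split proposed in
// HIVE-18124, keyed off table properties. Illustrative only.
import java.util.Map;

public final class AcidUtilsSketch {
    private AcidUtilsSketch() {}

    // Is "transactional=true" set on the table at all?
    public static boolean isTransactional(Map<String, String> tblProps) {
        return "true".equalsIgnoreCase(tblProps.get("transactional"));
    }

    // MM (insert-only) table: transactional with insert_only properties.
    public static boolean isInsertOnly(Map<String, String> tblProps) {
        return isTransactional(tblProps)
            && "insert_only".equalsIgnoreCase(tblProps.get("transactional_properties"));
    }

    // Full ACID: transactional and not insert-only.
    public static boolean isAcid(Map<String, String> tblProps) {
        return isTransactional(tblProps) && !isInsertOnly(tblProps);
    }
}
```

Splitting the predicates this way makes call sites state which semantics they actually need, instead of overloading a single isAcidTable() flag.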
[jira] [Updated] (HIVE-18124) clean up isAcidTable() API vs isInsertOnlyTable()
[ https://issues.apache.org/jira/browse/HIVE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18124: -- Description: With the addition of MM tables (_AcidUtils.isInsertOnlyTable(table)_) the methods in AcidUtils and dependent places are very muddled. There are now a number of places where we have something like _isAcidTable = AcidUtils.isFullAcidTable(table)_ and a later getter _boolean isAcidTable() \{ return isAcidTable;\}_ Need to clean it up so that there is a isTransactional(Table) that checks transactional=true setting and isAcid(Table) to mean full ACID and isInsertOnly(Table) to mean MM tables. This would accurately describe the semantics of the tables. was: With the addition of MM tables (_AcidUtils.isInsertOnlyTable(table)_) the methods in AcidUtils and dependent places are very muddled. Need to clean it up so that there is a isTransactional(Table) that checks transactional=true setting and isAcid(Table) to mean full ACID and isInsertOnly(Table) to mean MM tables. This would accurately describe the semantics of the tables. > clean up isAcidTable() API vs isInsertOnlyTable() > -- > > Key: HIVE-18124 > URL: https://issues.apache.org/jira/browse/HIVE-18124 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18124.01.patch, HIVE-18124.02.patch, > HIVE-18124.03.patch, HIVE-18124.04.patch, HIVE-18124.05.patch > > > With the addition of MM tables (_AcidUtils.isInsertOnlyTable(table)_) the > methods in AcidUtils and dependent places are very muddled. 
There are now a > number of places where we have something like _isAcidTable = > AcidUtils.isFullAcidTable(table)_ and a later getter > _boolean isAcidTable() \{ return isAcidTable;\}_ > Need to clean it up so that there is a isTransactional(Table) that checks > transactional=true setting and isAcid(Table) to mean full ACID and > isInsertOnly(Table) to mean MM tables. > This would accurately describe the semantics of the tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18201) Disable XPROD_EDGE for sq_count_check() created for scalar subqueries
[ https://issues.apache.org/jira/browse/HIVE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289600#comment-16289600 ] Hive QA commented on HIVE-18201: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12901788/HIVE-18201.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 59 failed/errored test(s), 11528 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_join_filters] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_join_nulls] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin2] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin_hint] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in_having] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_complex_all] (batchId=165) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_mapjoin] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_join_filters] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_multi_output_select] (batchId=165) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=113) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query61] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query63] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query65] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query66] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query67] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query68] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query69] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query70] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query72] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query73] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query77] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query79] 
(batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query7] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query80] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query82] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query84] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query85] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query86] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query87] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query88] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query89] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query8] (batchId=246) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query90] (batchId=246)
[jira] [Commented] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289598#comment-16289598 ] Andrew Sherman commented on HIVE-18268: --- I think your change is to allow a ? as a string literal (if it is quoted). But in your test
{noformat}
String sql = "select 1 from x where qa=\"?\"";
HivePreparedStatement ps = new HivePreparedStatement(connection, client, sessHandle, sql);
ps.setString(1, "v");
{noformat}
you are setting a parameter. So is the quoted ? supposed to be a literal? Thanks
> Hive Prepared Statement when split with double quoted in query fails
>
> Key: HIVE-18268
> URL: https://issues.apache.org/jira/browse/HIVE-18268
> Project: Hive
> Issue Type: Bug
> Components: JDBC
> Affects Versions: 2.3.2
> Reporter: Choi JaeHwan
> Assignee: Choi JaeHwan
> Fix For: 2.3.3
>
> Attachments: HIVE-18268.patch
>
> HIVE-13625 changed how the SQL statement is split when there is an odd number of escape characters, and added parameter-count validation:
> {code:java}
> // prev code
> StringBuilder newSql = new StringBuilder(parts.get(0));
> for (int i = 1; i < parts.size(); i++) {
>   if (!parameters.containsKey(i)) {
>     throw new SQLException("Parameter #" + i + " is unset");
>   }
>   newSql.append(parameters.get(i));
>   newSql.append(parts.get(i));
> }
> // change from HIVE-13625
> int paramLoc = 1;
> while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) {
>   // check the user has set the needs parameters
>   if (parameters.containsKey(paramLoc)) {
>     int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', 1);
>     newSql.deleteCharAt(tt);
>     newSql.insert(tt, parameters.get(paramLoc));
>   }
>   paramLoc++;
> }
> {code}
> If the number of split SQL parts and the number of parameters do not match, an SQLException is thrown.
> Currently, when splitting SQL, there is no handling of double quotes, so when the token ('?') appears between double quotes, the SQL is still split.
> I think that when the token between double quotes is a literal, it is correct not to split.
> For example, in the queries below:
> {code:java}
> // Some comments here
> 1: String query = " select 1 from x where qa=\"?\" ";
> 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE \"ALA[d_?]%\") ";
> {code}
> the ? is a literal, so the query should not be split.
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
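The quote-aware behavior the reporter argues for can be sketched as a scanner that ignores '?' inside quoted strings. This is illustrative only: it is not the HivePreparedStatement code, and it deliberately skips escape-character handling:

```java
// Sketch: count real JDBC parameter markers, treating any '?' inside single
// or double quotes as a literal rather than a placeholder.
public class ParamScanner {
    static int countParameters(String sql) {
        int count = 0;
        char quote = 0;  // currently open quote char, or 0 if none
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (quote != 0) {
                if (c == quote) {
                    quote = 0;       // closing quote
                }
            } else if (c == '\'' || c == '"') {
                quote = c;           // opening quote
            } else if (c == '?') {
                count++;             // real placeholder
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // The quoted ? is a literal: zero placeholders.
        System.out.println(countParameters("select 1 from x where qa=\"?\""));
        // The bare ? is a placeholder: one.
        System.out.println(countParameters("select 1 from x where qa = ?"));
    }
}
```

With this kind of scan, both example queries in the ticket report zero placeholders, so substitution (and the parameter-count check) would leave them alone.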
[jira] [Updated] (HIVE-18250) CBO gets turned off with duplicates in RR error
[ https://issues.apache.org/jira/browse/HIVE-18250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18250: --- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master, thanks for reviewing [~ashutoshc]! > CBO gets turned off with duplicates in RR error > --- > > Key: HIVE-18250 > URL: https://issues.apache.org/jira/browse/HIVE-18250 > Project: Hive > Issue Type: Bug > Components: CBO, Query Planning >Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0 >Reporter: Ashutosh Chauhan >Assignee: Jesus Camacho Rodriguez > Fix For: 3.0.0 > > Attachments: HIVE-18250.01.patch, HIVE-18250.02.patch > > > {code} > create table t1 (a int); > explain select t1.a as a1, min(t1.a) as a from t1 group by t1.a; > {code} > CBO gets turned off with: > {code} > WARN [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] parse.RowResolver: Found > duplicate column alias in RR: null.a => {null, a1, _col0: int} adding null.a > => {null, null, _col1: int} > 2017-12-07T15:27:47,651 ERROR [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] > parse.CalcitePlanner: CBO failed, skipping CBO. 
> org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Cannot > add column to RR: null.a => _col1: int due to duplication, see previous > warnings > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:3985) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4313) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1392) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1322) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > {code} > After that non-CBO path completes the query. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14498) Freshness period for query rewriting using materialized views
[ https://issues.apache.org/jira/browse/HIVE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14498: --- Attachment: HIVE-14498.01.patch > Freshness period for query rewriting using materialized views > - > > Key: HIVE-14498 > URL: https://issues.apache.org/jira/browse/HIVE-14498 > Project: Hive > Issue Type: Sub-task > Components: Materialized views >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14498.01.patch, HIVE-14498.patch > > > Once we have query rewriting in place (HIVE-14496), one of the main issues is > data freshness in the materialized views. > Since we will not support view maintenance at first, we could include a > HiveConf property to configure a max freshness period (_n timeunits_). If a > query comes, and the materialized view has been populated (by create, > refresh, etc.) for a longer period than _n_, then we should not use it for > rewriting the query. > Optionally, we could print a warning for the user indicating that the > materialized was not used because it was not fresh. -- This message was sent by Atlassian JIRA (v6.4.14#64029)