[jira] [Commented] (HIVE-21992) REPL DUMP throws NPE when dumping Create Function event.
[ https://issues.apache.org/jira/browse/HIVE-21992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887706#comment-16887706 ] Hive QA commented on HIVE-21992: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975117/HIVE-21992.02.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16674 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18079/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18079/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18079/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12975117 - PreCommit-HIVE-Build > REPL DUMP throws NPE when dumping Create Function event. > > > Key: HIVE-21992 > URL: https://issues.apache.org/jira/browse/HIVE-21992 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0, 3.2.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21992.01.patch, HIVE-21992.02.patch > > Time Spent: 40m > Remaining Estimate: 0h > > REPL DUMP throws NPE while dumping a Create Function event. It seems a null check > is missing for function.getResourceUris(). 
> {code} > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.repl.dump.io.FunctionSerializer.writeTo(FunctionSerializer.java:54) > at > org.apache.hadoop.hive.ql.parse.repl.dump.events.CreateFunctionHandler.handle(CreateFunctionHandler.java:48) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.dumpEvent(ReplDumpTask.java:304) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2727) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2394) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2066) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1764) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1758) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226) > at > org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:324) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:342) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > FAILED: Execution Error, return code 4 from > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask. > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.parse.repl.dump.io.FunctionSerializer.writeTo(FunctionSerializer.java:54) > at > org.apache.hadoop.hive.ql.parse.repl.dump.events.CreateFunctionHandler.handle(CreateFunctionHandler.java:48) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.dumpEvent(ReplDumpTask.java:304) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121) >
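The guard the report describes (FunctionSerializer.writeTo dereferencing a null getResourceUris()) can be sketched as follows. This is a simplified stand-in for illustration only, not the actual Hive patch; the class and helper names below are hypothetical:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Simplified sketch of the serializer logic described in the report:
// before emitting function metadata, treat a null resource-URI list as
// empty instead of dereferencing it (the NPE at FunctionSerializer.java:54).
public class FunctionDumpSketch {

    // Hypothetical helper: returns the URIs to dump, never null.
    static List<String> safeResourceUris(List<String> resourceUris) {
        return resourceUris == null ? Collections.<String>emptyList() : resourceUris;
    }

    static String serialize(String functionName, List<String> resourceUris) {
        StringBuilder sb = new StringBuilder(functionName);
        for (String uri : safeResourceUris(resourceUris)) {
            sb.append(',').append(uri);  // would NPE here without the guard
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A function created without any resource clause has no resource URIs,
        // which is the case that triggered the NPE during REPL DUMP.
        System.out.println(serialize("my_udf", null));
        System.out.println(serialize("my_udf", Arrays.asList("hdfs:///udfs/my_udf.jar")));
    }
}
```

The key point is that the null case is legal metastore state (a function registered without resources), so the serializer has to tolerate it rather than assume a non-null list.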
[jira] [Updated] (HIVE-21992) REPL DUMP throws NPE when dumping Create Function event.
[ https://issues.apache.org/jira/browse/HIVE-21992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-21992: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available)
[jira] [Commented] (HIVE-21992) REPL DUMP throws NPE when dumping Create Function event.
[ https://issues.apache.org/jira/browse/HIVE-21992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887720#comment-16887720 ] Sankar Hariappan commented on HIVE-21992: - 02.patch committed to master. Thanks [~maheshk114] for the review!
[jira] [Work logged] (HIVE-21992) REPL DUMP throws NPE when dumping Create Function event.
[ https://issues.apache.org/jira/browse/HIVE-21992?focusedWorklogId=278791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-278791 ] ASF GitHub Bot logged work on HIVE-21992: - Author: ASF GitHub Bot Created on: 18/Jul/19 07:37 Start Date: 18/Jul/19 07:37 Worklog Time Spent: 10m Work Description: sankarh commented on pull request #725: HIVE-21992: REPL DUMP throws NPE when dumping Create Function event. URL: https://github.com/apache/hive/pull/725 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 278791) Time Spent: 50m (was: 40m)
[jira] [Commented] (HIVE-21912) Implement BlacklistingLlapMetricsListener
[ https://issues.apache.org/jira/browse/HIVE-21912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887726#comment-16887726 ] Hive QA commented on HIVE-21912: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} llap-common in master has 90 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 24s{color} | {color:blue} llap-client in master has 26 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 24s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 46s{color} | {color:blue} llap-server in master has 83 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 22s{color} | {color:red} llap-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 21s{color} | {color:red} llap-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 21s{color} | {color:red} llap-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} The patch common passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} llap-common: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} llap-client: The patch generated 0 new + 15 unchanged - 2 fixed = 15 total (was 17) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch llap-tez passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} llap-server: The patch generated 0 new + 0 unchanged - 7 fixed = 0 total (was 7) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s{color} | {color:red} llap-tez generated 1 new + 17 unchanged - 0 fixed = 18 total (was 17) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s{color} | {color:red} llap-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 23s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:llap-tez | | | org.apache.hadoop.hive.llap.tezplugins.metrics.BlacklistingLlapMetricsListener.newClusterMetrics(Map) makes inefficient use of keySet iterator instead of entrySet iterator At BlacklistingLlapMetricsListener.java:keySet iterator instead of entrySet iterator At BlacklistingLlapMetricsListener.java:[line 112] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense
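The new FindBugs warning above (newClusterMetrics "makes inefficient use of keySet iterator instead of entrySet iterator") is the standard map-iteration inefficiency: iterating keySet() and then calling get(key) performs a second hash lookup per entry. A generic illustration of the flagged pattern and its fix, not the actual listener code:

```java
import java.util.HashMap;
import java.util.Map;

public class MapIterationSketch {

    // Flagged pattern: one extra hash lookup per key via metrics.get(key).
    static long sumViaKeySet(Map<String, Long> metrics) {
        long total = 0;
        for (String key : metrics.keySet()) {
            total += metrics.get(key);  // the lookup FindBugs objects to
        }
        return total;
    }

    // Preferred form: entrySet yields key and value together, no second lookup.
    static long sumViaEntrySet(Map<String, Long> metrics) {
        long total = 0;
        for (Map.Entry<String, Long> e : metrics.entrySet()) {
            total += e.getValue();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Long> metrics = new HashMap<>();
        metrics.put("node1", 10L);
        metrics.put("node2", 32L);
        // Both forms compute the same result; only the lookup cost differs.
        System.out.println(sumViaKeySet(metrics) == sumViaEntrySet(metrics));
    }
}
```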
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.37.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.4.patch, HIVE-21637.5.patch, HIVE-21637.6.patch, > HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronous, and in an HMS HA setting we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-21912) Implement BlacklistingLlapMetricsListener
[ https://issues.apache.org/jira/browse/HIVE-21912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887748#comment-16887748 ] Hive QA commented on HIVE-21912: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975120/HIVE-21912.11.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16681 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18080/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18080/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18080/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12975120 - PreCommit-HIVE-Build > Implement BlacklistingLlapMetricsListener > - > > Key: HIVE-21912 > URL: https://issues.apache.org/jira/browse/HIVE-21912 > Project: Hive > Issue Type: Sub-task > Components: llap, Tez >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21912.10.patch, HIVE-21912.11.patch, > HIVE-21912.2.patch, HIVE-21912.3.patch, HIVE-21912.4.patch, > HIVE-21912.5.patch, HIVE-21912.6.patch, HIVE-21912.7.patch, > HIVE-21912.8.patch, HIVE-21912.9.patch, HIVE-21912.patch, > HIVE-21912.wip-2.patch, HIVE-21912.wip.patch > > Time Spent: 6h > Remaining Estimate: 0h > > We should implement a DaemonStatisticsHandler which: > * If a node's average response time is bigger than 150% (configurable) of the > other nodes' > * If the other nodes have enough empty executors to handle the requests > Then disables the limping node. 
-- This message was sent by Atlassian JIRA (v7.6.14#76016)
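The two conditions in the HIVE-21912 description (response time above a configurable 150% threshold of the other nodes, and enough spare executors elsewhere to absorb the load) reduce to a simple predicate. The following is a hypothetical simplification for illustration, not the actual BlacklistingLlapMetricsListener logic:

```java
public class BlacklistSketch {

    // Hypothetical predicate combining the two conditions from the issue
    // description. threshold is the configurable ratio, e.g. 1.5 for
    // "bigger than 150% of the other nodes".
    static boolean shouldDisable(double nodeAvgResponseMs,
                                 double otherNodesAvgResponseMs,
                                 int emptyExecutorsOnOtherNodes,
                                 int executorsOnLimpingNode,
                                 double threshold) {
        boolean tooSlow = nodeAvgResponseMs > threshold * otherNodesAvgResponseMs;
        // Only disable the node if the remaining nodes can absorb its work;
        // otherwise blacklisting would reduce total capacity for no gain.
        boolean othersHaveCapacity =
                emptyExecutorsOnOtherNodes >= executorsOnLimpingNode;
        return tooSlow && othersHaveCapacity;
    }

    public static void main(String[] args) {
        // 300ms vs 180ms average elsewhere, 8 free executors for 4 displaced ones.
        System.out.println(shouldDisable(300, 180, 8, 4, 1.5));  // true
        System.out.println(shouldDisable(250, 180, 8, 4, 1.5));  // false: under 150%
        System.out.println(shouldDisable(300, 180, 2, 4, 1.5));  // false: no spare capacity
    }
}
```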
[jira] [Commented] (HIVE-21771) Support partition filter (where clause) in REPL dump command (Bootstrap Dump)
[ https://issues.apache.org/jira/browse/HIVE-21771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887784#comment-16887784 ] Hive QA commented on HIVE-21771: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 28s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 3s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 34s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 10 new + 570 unchanged - 0 fixed = 580 total (was 570) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 12s{color} | {color:red} ql generated 9 new + 2241 unchanged - 9 fixed = 2250 total (was 2250) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Should org.apache.hadoop.hive.ql.parse.HiveParser$DFA239 be a _static_ inner class? At HiveParser.java:inner class? 
At HiveParser.java:[lines 48849-48862] | | | Dead store to LA31_128 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:[line 48589] | | | Dead store to LA31_130 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:[line 48602] | | | Dead store to LA31_132 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:[line 48615] | | | Dead store to LA31_134 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:[line 48628] | | | Dead store to LA31_136 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA31.specialStateTransition(int, IntStream) At HiveParser.java:[line 48641] | | | Dead store to LA31_138 in org.apache.hadoop.hive.ql.parse.HiveParser$
[jira] [Updated] (HIVE-21912) Implement BlacklistingLlapMetricsListener
[ https://issues.apache.org/jira/browse/HIVE-21912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21912: -- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~odraese] and [~szita] for the review! -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-22000) Trying to Create a Connection to an Oracle Data
[ https://issues.apache.org/jira/browse/HIVE-22000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887798#comment-16887798 ] lushiqin commented on HIVE-22000: - test > Trying to Create a Connection to an Oracle Data > --- > > Key: HIVE-22000 > URL: https://issues.apache.org/jira/browse/HIVE-22000 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.1.1 > Environment: hdfs version > Hadoop 3.2.0 > Source code repository https://github.com/apache/hadoop.git -r > e97acb3bd8f3befd27418996fa5d4b50bf2e17bf > Compiled by sunilg on 2019-01-08T06:08Z > Compiled with protoc 2.5.0 > From source with checksum d3f0795ed0d9dc378e2c785d3668f39 > java -version > openjdk version "1.8.0_201" > OpenJDK Runtime Environment (build 1.8.0_201-b09) > OpenJDK 64-Bit Server VM (build 25.201-b09, mixed mode) > hive --version > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/home/hadoop/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Hive 3.1.1 > Git git://daijymacpro-2.local/Users/daijy/commit/hive -r > f4e0529634b6231a0072295da48af466cf2f10b7 > Compiled by daijy on Tue Oct 23 17:19:24 PDT 2018 > From source with checksum 6deca5a8401bbb6c6b49898be6fcb80e >Reporter: rob >Priority: Blocker > > Hi > I am trying to connect to an Oracle database. I have put the relevant jar in > the lib folder > ls -la hive/lib/ > -rw-rw-r-- 1 hadoop hadoop 4036257 Jul 12 15:37 ojdbc8.jar > > Using beeline > SLF4J: Class path contains multiple SLF4J bindings. 
> SLF4J: Found binding in > [jar:file:/home/hadoop/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Beeline version 3.1.1 by Apache Hive > beeline> !scan > scan complete in 214ms > 8 driver classes found > Compliant Version Driver Class > yes 6.2 com.microsoft.sqlserver.jdbc.SQLServerDriver > no 5.1 com.mysql.jdbc.Driver > yes 12.2 oracle.jdbc.OracleDriver > yes 1.16 org.apache.calcite.avatica.remote.Driver > yes 1.16 org.apache.calcite.jdbc.Driver > yes 10.14 org.apache.derby.jdbc.AutoloadedDriver > no 3.1 org.apache.hive.jdbc.HiveDriver > no 9.4 org.postgresql.Driver > > If I try and connect to the database via the beeline command line > > beeline -u > jdbc:oracle:thin:@//robtest1.ceo8wqiptv9v.eu-west-1.rds.amazonaws.com:1521/ORCL > -n dbadmin -p > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/home/hadoop/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. 
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Connecting to > jdbc:oracle:thin:@//robtest1.ceo8wqiptv9v.eu-west-1.rds.amazonaws.com:1521/ORCL > Connected to: Oracle (version Oracle Database 12c Enterprise Edition Release > 12.1.0.2.0 - 64bit Production > With the Partitioning, OLAP, Advanced Analytics and Real Application Testing > options) > Driver: Oracle JDBC driver (version 12.2.0.1.0) > Error: READ_COMMITTED and SERIALIZABLE are the only valid transaction levels > (state=9,code=17030) > Beeline version 3.1.1 by Apache Hive > 0: jdbc:oracle:thin:@//robtest1.ceo8wqiptv9v.> select count(*) from > user_tables; > +---+ > | COUNT(*) | > +---+ > | 1 | > +---+ > 1 row selected (0.376 seconds) > 0: jdbc:oracle:thin:@//robtest1.ceo8wqiptv9v.> select count(*) from > RobOracleTable; > +---+ > | COUNT(*) | > +---+ > | 3 | > +---+ > 1 row selected (0.027 seconds) > > When I try and create a table I get > > 0: jdbc:hive2://> CREATE EXTERNAL TABLE RobOracleTable( > . . . . . . . . > id INT, > . . . . . . . . > names STRING > . . . . . . . . > ) > . . . . . . . . > STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler' > . . . . . . . . > TBLPROPERTIES ( > . . . . . . . . > "hive.sql.database.type" = "ORACLE", > . . . . . . . . > "hive.sql.jdbc.driver" = "oracle
[jira] [Issue Comment Deleted] (HIVE-22000) Trying to Create a Connection to an Oracle Data
[ https://issues.apache.org/jira/browse/HIVE-22000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lushiqin updated HIVE-22000: Comment: was deleted (was: test)
[jira] [Commented] (HIVE-21771) Support partition filter (where clause) in REPL dump command (Bootstrap Dump)
[ https://issues.apache.org/jira/browse/HIVE-21771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887819#comment-16887819 ] Hive QA commented on HIVE-21771: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975123/HIVE-21771.02.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16680 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeader.testHouseKeepingThreadExistence (batchId=241) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18081/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18081/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18081/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12975123 - PreCommit-HIVE-Build > Support partition filter (where clause) in REPL dump command (Bootstrap Dump) > - > > Key: HIVE-21771 > URL: https://issues.apache.org/jira/browse/HIVE-21771 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21771.01.patch, HIVE-21771.02.patch > > Time Spent: 10m > Remaining Estimate: 0h > > *Bootstrap for managed table* > User should be allowed to execute REPL DUMP with where clause. The where > clause should support filtering out partition from dump. 
Format of the where clause should be similar to *"REPL DUMP dbname from 10 where 't0' where key < 10, 't1' where key = 3, '(t2*)|t3' where key > 3"*. For the initial version, only very basic filter conditions will be supported; the complexity will be increased later as and when required.
> * From the AST generated for the where clause, extract the table information.
> * Generate an AST for each table.
> * List the partitions for each table using the AST generated for it, via the same metastore API used by select queries.
> * During bootstrap load, use the partition list to dump the partitions.
> * During incremental dump, use the list to filter out the events.
> In case of bootstrap load, all the tables of the database will be scanned and:
> * If a table is not partitioned, then it will be dumped.
> * If the key provided in the filter condition for the table is not a partition column, then the dump will fail.
> * If a table is not mentioned in the where clause, then all partitions of the table will be dumped.
> * All the partitions of a table satisfying the where clause will be dumped.
> *Incremental for managed table (Not part of this patch)*
> In case of incremental dump, the events from the notification log will be scanned and, once the partition spec is extracted from an event, it will be filtered against the condition.
> * If the table is not partitioned, then the event will be added to the dump.
> * If the key mentioned is not a partition column, then the dump will fail.
> * If the table is not mentioned in the filter, then the event will be added to the dump.
> * If the event is multi-partitioned, then the event will be added to the dump. (Filtering out redundant partitions from the message will be done as part of a separate task.)
> * If the partition spec matches the filter, then the event will be added to the dump.
-- This message was sent by Atlassian JIRA (v7.6.14#76016)
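The per-table filtering rules above can be modeled with a small sketch. This is a hypothetical illustration, not the HIVE-21771 patch: the real implementation parses the where clause into per-table ASTs and lists partitions through the metastore API, while here a table-to-predicate map stands in for that machinery, and all names are invented.

```java
import java.util.Map;
import java.util.function.Predicate;

/** Hypothetical model of the bootstrap-dump partition filter described above. */
public class ReplPartitionFilterSketch {

    /**
     * Decides whether a partition (keyed by its partition-column value) of the
     * given table should be dumped, given the per-table filters extracted from
     * the where clause. A table absent from the map has no filter, so all of
     * its partitions are dumped.
     */
    public static boolean shouldDumpPartition(String table, int partitionKeyValue,
                                              Map<String, Predicate<Integer>> tableFilters) {
        Predicate<Integer> filter = tableFilters.get(table);
        return filter == null || filter.test(partitionKeyValue);
    }

    public static void main(String[] args) {
        // Filters corresponding to "t0 where key < 10, t1 where key = 3".
        Map<String, Predicate<Integer>> filters = Map.of(
                "t0", key -> key < 10,
                "t1", key -> key == 3);
        System.out.println(shouldDumpPartition("t0", 5, filters));  // true: 5 < 10
        System.out.println(shouldDumpPartition("t1", 7, filters));  // false: 7 != 3
        System.out.println(shouldDumpPartition("t9", 42, filters)); // true: no filter for t9
    }
}
```

The "dump will fail" rule (filter key is not a partition column) is a validation step that happens before any filtering and is omitted from this sketch.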
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887891#comment-16887891 ] Hive QA commented on HIVE-21637:
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 26s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 14s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 8s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} beeline in master has 44 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 44s{color} | {color:blue} itests/util in master has 44 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 10 new + 496 unchanged - 4 fixed = 506 total (was 500) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 167 new + 2232 unchanged - 65 fixed = 2399 total (was 2297) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 0s{color} | {color:red} ql: The patch generated 82 new + 2262 unchanged - 32 fixed = 2344 total (was 2294) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The patch generated 5 new + 31 unchanged - 0 fixed = 36 total (was 31) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} itests/hcata
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887892#comment-16887892 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975136/HIVE-21637.37.patch
{color:green}SUCCESS:{color} +1 due to 124 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 69 failed/errored test(s), 16675 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats2] (batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_update_status] (batchId=99)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_update_status] (batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_update_status_disable_bitvector] (batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock1] (batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock2] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock3] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_wide_table] (batchId=96)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_2_exim_basic] (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_3_exim_metadata] (batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_nonpart] (batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part] (batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_sizebug] (batchId=89)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets] (batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create] (batchId=180)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3] (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_describe] (batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sample10_mm] (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats] (batchId=182)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[allow_change_col_type_par_neg] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_fk_tbl1] (batchId=101)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_pk_tbl] (batchId=101)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_invalid_constraint2] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into1] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into2] (batchId=102)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into3] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into4] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg1] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg2] (batchId=101)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg3] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg4] (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg5] (batchId=101)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDrive
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887894#comment-16887894 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975136/HIVE-21637.37.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18083/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18083/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18083/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12975136/HIVE-21637.37.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12975136 - PreCommit-HIVE-Build > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.4.patch, HIVE-21637.5.patch, HIVE-21637.6.patch, > 
HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is updated asynchronously, so in an HMS HA setting we can only get eventual consistency. In this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
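The contrast between the existing eventually consistent cache and the proposed synchronized one can be illustrated with a toy write-through cache: the backing store and the cache are updated under one lock, so a read never returns a value older than the last committed write. All names here are hypothetical and this is not CachedStore code; the real problem is harder because in HMS HA the cache and the database live in different processes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy write-through cache: reads never observe a value older than the last committed write. */
public class WriteThroughCacheSketch {
    private final Map<String, String> backingStore = new ConcurrentHashMap<>(); // stands in for the RDBMS
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    /** Update the store and the cache under one lock, so readers see them move together. */
    public synchronized void put(String key, String value) {
        backingStore.put(key, value);
        cache.put(key, value);
    }

    /** Serve from cache, falling back to (and populating from) the backing store on a miss. */
    public synchronized String get(String key) {
        return cache.computeIfAbsent(key, backingStore::get);
    }
}
```

An asynchronously refreshed cache drops the shared lock and lets `get` return a stale entry until a background refresh catches up, which is exactly the eventual-consistency behavior this issue sets out to remove.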
[jira] [Reopened] (HIVE-21173) Upgrade to the latest release of Apache Thrift
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Lavati reopened HIVE-21173: - Assignee: David Lavati I'm reopening this to apply 0.9.3-1, which addresses the mentioned CVE. HIVE-21000 will eventually surpass this, but we're kind of blocked there without a new Accumulo release. > Upgrade to the latest release of Apache Thrift > -- > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > > The project currently depends on libthrift-0.9.3; however, Thrift released 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 (CVE-2018-1320). Updating Thrift to the latest version will remove that vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. fb303 is contributed code (in '/contrib') and it has not been maintained. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Lavati updated HIVE-21173: Summary: Upgrade Apache Thrift to 0.9.3-1 (was: Upgrade to the latest release of Apache Thrift) -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Work logged] (HIVE-21831) Stats should be reset correctly during load of a partitioned ACID table
[ https://issues.apache.org/jira/browse/HIVE-21831?focusedWorklogId=278921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-278921 ] ASF GitHub Bot logged work on HIVE-21831: - Author: ASF GitHub Bot Created on: 18/Jul/19 11:49 Start Date: 18/Jul/19 11:49 Worklog Time Spent: 10m Work Description: dlavati commented on pull request #659: HIVE-21831: Stats should be reset correctly during load of a partitioned ACID table URL: https://github.com/apache/hive/pull/659 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 278921) Time Spent: 0.5h (was: 20m) > Stats should be reset correctly during load of a partitioned ACID table > --- > > Key: HIVE-21831 > URL: https://issues.apache.org/jira/browse/HIVE-21831 > Project: Hive > Issue Type: Bug > Components: Hive, Import/Export >Affects Versions: 3.0.0, 3.1.0, 3.1.1 >Reporter: David Lavati >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21831.01.patch, HIVE-21831.02.patch, > HIVE-21831.02.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > While running something similar to the following example, I noticed that an > import of a partitioned ACID table using the ORC format fails to provide > table statistics: > {code:java} > set hive.stats.autogather=true; > set hive.stats.column.autogather=true; > set hive.fetch.task.conversion=none; > set hive.support.concurrency=true; > set hive.default.fileformat.managed=ORC; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create transactional table int_src (foo int, bar int); > insert into int_src select 1,1; > create transactional table int_exp(foo int) partitioned by (bar int); > insert into int_exp select * from int_src; > 
select count(*) from int_exp; > create transactional table int_imp(foo int) partitioned by (bar int); > EXPORT TABLE int_exp to '/tmp/expint'; > IMPORT TABLE int_imp FROM '/tmp/expint'; > select count(*) FROM int_imp; > {code} > The count returned 0 (as opposed to 1; even for on the order of 100k records it was 0) and correct statistics were only available after running compute statistics. > > This was unique to ACID + partitioning + ORC, but this isn't the expected behavior. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-21173: -- Labels: pull-request-available (was: ) -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Work logged] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?focusedWorklogId=278928&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-278928 ] ASF GitHub Bot logged work on HIVE-21173: - Author: ASF GitHub Bot Created on: 18/Jul/19 11:57 Start Date: 18/Jul/19 11:57 Worklog Time Spent: 10m Work Description: dlavati commented on pull request #730: HIVE-21173 Upgrade Apache Thrift to 0.9.3-1 URL: https://github.com/apache/hive/pull/730 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 278928) Time Spent: 10m Remaining Estimate: 0h -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Lavati updated HIVE-21173: Attachment: HIVE-21173.01.patch Status: Patch Available (was: Reopened) > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04.This release includes a security fix for > THRIFT-4506 (CVE-2018-1320). Updating thrift to the latest version will > remove that vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query
[ https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887927#comment-16887927 ] Rajesh Balamohan commented on HIVE-21778: - +1 > CBO: "Struct is not null" gets evaluated as `nullable` always causing filter > miss in the query > -- > > Key: HIVE-21778 > URL: https://issues.apache.org/jira/browse/HIVE-21778 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 4.0.0, 2.3.5 >Reporter: Rajesh Balamohan >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-21778.1.patch, test_null.q, test_null.q.out > > > {noformat} > drop table if exists test_struct; > CREATE external TABLE test_struct > ( > f1 string, > demo_struct struct, > datestr string > ); > set hive.cbo.enable=true; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note > that demo_struct filter is not added here > Filter Operator > predicate: (datestr = '2019-01-01') (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > set hive.cbo.enable=false; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) <- Note that demo_struct filter is added when CBO is > turned off > Filter Operator > predicate: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > 
ListSink > {noformat} > In CalcitePlanner::genFilterRelNode, the following code fails to evaluate > this filter: > {noformat} > RexNode factoredFilterExpr = RexUtil > .pullFactors(cluster.getRexBuilder(), convertedFilterExpr); > {noformat} > Note that even if we add `demo_struct.f1` it would end up pushing the filter > correctly. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
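[Editor's sketch] The plan difference in HIVE-21778 can be reduced to a small simulation: if predicate factoring drops the `demo_struct is not null` conjunct, rows with a null struct survive the filter. This is plain Java with illustrative names, not Hive or Calcite code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StructFilterDemo {
    // Rows modeled as {f1, demo_struct (nullable), datestr}, mirroring the
    // repro table in the issue above.
    static List<Object[]> filter(List<Object[]> rows, boolean keepNullCheck) {
        List<Object[]> out = new ArrayList<>();
        for (Object[] r : rows) {
            boolean keep = "2019-01-01".equals(r[2]);      // datestr predicate
            if (keepNullCheck) {
                keep = keep && r[1] != null;               // the conjunct the CBO plan drops
            }
            if (keep) {
                out.add(r);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object[]> rows = Arrays.asList(
            new Object[] {"a", "struct1", "2019-01-01"},
            new Object[] {"b", null,      "2019-01-01"},   // should be filtered out
            new Object[] {"c", "struct2", "2019-01-02"});
        System.out.println(filter(rows, true).size());     // full predicate keeps 1 row
        System.out.println(filter(rows, false).size());    // dropped conjunct keeps 2 rows
    }
}
```

Running the simulation with and without the null-check conjunct yields different row counts, which is exactly the filter miss the CBO plan exhibits.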
[jira] [Assigned] (HIVE-16690) Configure Tez cartesian product edge based on LLAP cluster size
[ https://issues.apache.org/jira/browse/HIVE-16690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor reassigned HIVE-16690: --- Assignee: Laszlo Bodor (was: Zhiyuan Yang) > Configure Tez cartesian product edge based on LLAP cluster size > --- > > Key: HIVE-16690 > URL: https://issues.apache.org/jira/browse/HIVE-16690 > Project: Hive > Issue Type: Bug >Reporter: Zhiyuan Yang >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-16690.1.patch, HIVE-16690.2.patch, > HIVE-16690.2.patch, HIVE-16690.2.patch, HIVE-16690.addendum.patch > > > In HIVE-14731 we are using default value for target parallelism of fair > cartesian product edge. Ideally this should be set according to cluster size. > In case of LLAP it's pretty easy to get cluster size, i.e., number of > executors. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-16690) Configure Tez cartesian product edge based on LLAP cluster size
[ https://issues.apache.org/jira/browse/HIVE-16690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor updated HIVE-16690: Attachment: HIVE-16690.2.patch > Configure Tez cartesian product edge based on LLAP cluster size > --- > > Key: HIVE-16690 > URL: https://issues.apache.org/jira/browse/HIVE-16690 > Project: Hive > Issue Type: Bug >Reporter: Zhiyuan Yang >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-16690.1.patch, HIVE-16690.2.patch, > HIVE-16690.2.patch, HIVE-16690.2.patch, HIVE-16690.2.patch, > HIVE-16690.addendum.patch > > > In HIVE-14731 we are using default value for target parallelism of fair > cartesian product edge. Ideally this should be set according to cluster size. > In case of LLAP it's pretty easy to get cluster size, i.e., number of > executors. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-18227) Tez parallel execution fail
[ https://issues.apache.org/jira/browse/HIVE-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887960#comment-16887960 ] Laszlo Bodor commented on HIVE-18227: - parallel setting has been changed in HIVE-21646, reuploading a patch with only the test cases > Tez parallel execution fail > --- > > Key: HIVE-18227 > URL: https://issues.apache.org/jira/browse/HIVE-18227 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18227.1.patch, HIVE-18227.2.patch, > HIVE-18227.3.patch > > > Running tez Dag in parallel within a session fail. Here is the test case: > {code} > set hive.exec.parallel=true; > set hive.merge.tezfiles=true; > set tez.grouping.max-size=10; > set tez.grouping.min-size=1; > from student > insert overwrite table student4 select * > insert overwrite table student5 select * > insert overwrite table student6 select *; > {code} > The merge task run in parallel and result the exception: > {code} > org.apache.tez.dag.api.TezException: App master already running a DAG > at > org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1255) > at > org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:118) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:161) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7471) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
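[Editor's sketch] The exception in HIVE-18227 reflects a Tez constraint: an application master runs one DAG at a time, so concurrent submissions triggered by `hive.exec.parallel` must be rejected or serialized. A minimal Java sketch of such a gate follows; the class and method names are hypothetical and are not the Tez or Hive API.

```java
public class DagSubmitGate {
    // Tracks whether a DAG is currently running in the (single) app master.
    private boolean dagRunning = false;

    // Serializing option: block until the running DAG finishes.
    public synchronized void acquire() throws InterruptedException {
        while (dagRunning) {
            wait();
        }
        dagRunning = true;
    }

    // Rejecting option: this is the behavior the stack trace above shows.
    public synchronized void acquireOrThrow() {
        if (dagRunning) {
            throw new IllegalStateException("App master already running a DAG");
        }
        dagRunning = true;
    }

    public synchronized void release() {
        dagRunning = false;
        notifyAll();
    }
}
```

Under this model, the three merge tasks of the repro each try to submit a DAG at once; only the first succeeds and the rest fail, matching "App master already running a DAG".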
[jira] [Assigned] (HIVE-18227) Tez parallel execution fail
[ https://issues.apache.org/jira/browse/HIVE-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor reassigned HIVE-18227: --- Assignee: Laszlo Bodor (was: Daniel Dai) > Tez parallel execution fail > --- > > Key: HIVE-18227 > URL: https://issues.apache.org/jira/browse/HIVE-18227 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-18227.1.patch, HIVE-18227.2.patch, > HIVE-18227.3.patch, HIVE-18227.4.patch > > > Running tez Dag in parallel within a session fail. Here is the test case: > {code} > set hive.exec.parallel=true; > set hive.merge.tezfiles=true; > set tez.grouping.max-size=10; > set tez.grouping.min-size=1; > from student > insert overwrite table student4 select * > insert overwrite table student5 select * > insert overwrite table student6 select *; > {code} > The merge task run in parallel and result the exception: > {code} > org.apache.tez.dag.api.TezException: App master already running a DAG > at > org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1255) > at > org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:118) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:161) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7471) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > 
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-18227) Tez parallel execution fail
[ https://issues.apache.org/jira/browse/HIVE-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor updated HIVE-18227: Attachment: HIVE-18227.4.patch > Tez parallel execution fail > --- > > Key: HIVE-18227 > URL: https://issues.apache.org/jira/browse/HIVE-18227 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-18227.1.patch, HIVE-18227.2.patch, > HIVE-18227.3.patch, HIVE-18227.4.patch > > > Running tez Dag in parallel within a session fail. Here is the test case: > {code} > set hive.exec.parallel=true; > set hive.merge.tezfiles=true; > set tez.grouping.max-size=10; > set tez.grouping.min-size=1; > from student > insert overwrite table student4 select * > insert overwrite table student5 select * > insert overwrite table student6 select *; > {code} > The merge task run in parallel and result the exception: > {code} > org.apache.tez.dag.api.TezException: App master already running a DAG > at > org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1255) > at > org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:118) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:161) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7471) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (HIVE-18227) Tez parallel execution fail
[ https://issues.apache.org/jira/browse/HIVE-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor reassigned HIVE-18227: --- Assignee: Daniel Dai (was: Laszlo Bodor) > Tez parallel execution fail > --- > > Key: HIVE-18227 > URL: https://issues.apache.org/jira/browse/HIVE-18227 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18227.1.patch, HIVE-18227.2.patch, > HIVE-18227.3.patch, HIVE-18227.4.patch > > > Running tez Dag in parallel within a session fail. Here is the test case: > {code} > set hive.exec.parallel=true; > set hive.merge.tezfiles=true; > set tez.grouping.max-size=10; > set tez.grouping.min-size=1; > from student > insert overwrite table student4 select * > insert overwrite table student5 select * > insert overwrite table student6 select *; > {code} > The merge task run in parallel and result the exception: > {code} > org.apache.tez.dag.api.TezException: App master already running a DAG > at > org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1255) > at > org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:118) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:161) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7471) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > 
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (HIVE-16690) Configure Tez cartesian product edge based on LLAP cluster size
[ https://issues.apache.org/jira/browse/HIVE-16690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor reassigned HIVE-16690: --- Assignee: Zhiyuan Yang (was: Laszlo Bodor) > Configure Tez cartesian product edge based on LLAP cluster size > --- > > Key: HIVE-16690 > URL: https://issues.apache.org/jira/browse/HIVE-16690 > Project: Hive > Issue Type: Bug >Reporter: Zhiyuan Yang >Assignee: Zhiyuan Yang >Priority: Major > Attachments: HIVE-16690.1.patch, HIVE-16690.2.patch, > HIVE-16690.2.patch, HIVE-16690.2.patch, HIVE-16690.2.patch, > HIVE-16690.addendum.patch > > > In HIVE-14731 we are using default value for target parallelism of fair > cartesian product edge. Ideally this should be set according to cluster size. > In case of LLAP it's pretty easy to get cluster size, i.e., number of > executors. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887995#comment-16887995 ] Hive QA commented on HIVE-21173: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 42s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-18084/dev-support/hive-personality.sh | | git revision | master / 374f361 | | Default Java | 1.8.0_111 | | modules | C: . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-18084/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04.This release includes a security fix for > THRIFT-4506 (CVE-2018-1320). Updating thrift to the latest version will > remove that vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-16983) getFileStatus on accessible s3a://[bucket-name]/folder: throws com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error
[ https://issues.apache.org/jira/browse/HIVE-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888001#comment-16888001 ] Alexandre Pastorino commented on HIVE-16983: Hello. I ran into the exact same problem, and the cause on my side was that an "External Account" overrode the role-based authorizations for Impala and Hue only, which means Hive, HDFS, Hadoop and Spark worked properly, but Impala displayed that error when trying to open a file for read or write. I hope it helps someone in the future! External Account configuration is in the "Administration > External Accounts" menu. > getFileStatus on accessible s3a://[bucket-name]/folder: throws > com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon > S3; Status Code: 403; Error Code: 403 Forbidden; > - > > Key: HIVE-16983 > URL: https://issues.apache.org/jira/browse/HIVE-16983 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.1 > Environment: Hive 2.1.1 on Ubuntu 14.04 AMI in AWS EC2, connecting to > S3 using s3a:// protocol >Reporter: Alex Baretto >Assignee: Vlad Gudikov >Priority: Major > Fix For: 2.1.1 > > Attachments: HIVE-16983-branch-2.1.patch > > > I've followed various published documentation on integrating Apache Hive > 2.1.1 with AWS S3 using the `s3a://` scheme, configuring `fs.s3a.access.key` > and > `fs.s3a.secret.key` for `hadoop/etc/hadoop/core-site.xml` and > `hive/conf/hive-site.xml`. > I am at the point where I am able to get `hdfs dfs -ls s3a://[bucket-name]/` > to work properly (it returns s3 ls of that bucket). So I know my creds, > bucket access, and overall Hadoop setup is valid. > hdfs dfs -ls s3a://[bucket-name]/ > > drwxrwxrwx - hdfs hdfs 0 2017-06-27 22:43 > s3a://[bucket-name]/files > ...etc. > hdfs dfs -ls s3a://[bucket-name]/files > > drwxrwxrwx - hdfs hdfs 0 2017-06-27 22:43 > s3a://[bucket-name]/files/my-csv.csv > However, when I attempt to access the same s3 resources from hive, e.g.
run > any `CREATE SCHEMA` or `CREATE EXTERNAL TABLE` statements using `LOCATION > 's3a://[bucket-name]/files/'`, it fails. > for example: > >CREATE EXTERNAL TABLE IF NOT EXISTS mydb.my_table ( my_table_id string, > >my_tstamp timestamp, my_sig bigint ) ROW FORMAT DELIMITED FIELDS TERMINATED > >BY ',' LOCATION 's3a://[bucket-name]/files/'; > I keep getting this error: > >FAILED: Execution Error, return code 1 from > >org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: > >java.nio.file.AccessDeniedException s3a://[bucket-name]/files: getFileStatus > >on s3a://[bucket-name]/files: > >com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: > >Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: > >C9CF3F9C50EF08D1), S3 Extended Request ID: > >T2xZ87REKvhkvzf+hdPTOh7CA7paRpIp6IrMWnDqNFfDWerkZuAIgBpvxilv6USD0RSxM9ymM6I=) > This makes no sense. I have access to the bucket as one can see in the hdfs > test. And I've added the proper creds to hive-site.xml. > Anyone have any idea what's missing from this equation? -- This message was sent by Atlassian JIRA (v7.6.14#76016)
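[Editor's sketch] The credential properties the reporter describes adding to `core-site.xml` and `hive-site.xml` look like the fragment below; the values are placeholders. Per the comment above, per-service overrides (e.g. a cluster manager's "External Accounts" configuration) can still shadow these for individual services even when the files are correct.

```xml
<!-- Placeholder values; real deployments should prefer a credential -->
<!-- provider over plaintext keys in config files. -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```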
[jira] [Commented] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888025#comment-16888025 ] Hive QA commented on HIVE-21173: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975153/HIVE-21173.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16681 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18084/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18084/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18084/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12975153 - PreCommit-HIVE-Build > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04.This release includes a security fix for > THRIFT-4506 (CVE-2018-1320). Updating thrift to the latest version will > remove that vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Lavati updated HIVE-21173: Description: The project currently depends on libthrift-0.9.3, however thrift released 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 (CVE-2018-1320). Updating thrift to the latest version will remove that vulnerability. Also note the Apache Thrift project does not publish "libfb303" any longer. fb303 is contributed code (in '/contrib') and it has not been maintained. Ps.: 0.9.3.1 also addresses the CVE, see THRIFT-4506 was: The project currently depends on libthrift-0.9.3, however thrift released 0.12.0 on 2019-JAN-04.This release includes a security fix for THRIFT-4506 (CVE-2018-1320). Updating thrift to the latest version will remove that vulnerability. Also note the Apache Thrift project does not publish "libfb303" any longer. fb303 is contributed code (in '/contrib') and it has not been maintained. > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 > (CVE-2018-1320). Updating thrift to the latest version will remove that > vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. > > Ps.: 0.9.3.1 also addresses the CVE, see THRIFT-4506 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888030#comment-16888030 ] David Lavati commented on HIVE-21173: - There weren't any generated changes, as this release only affected the related jar. > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 > (CVE-2018-1320). Updating thrift to the latest version will remove that > vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. > > Ps.: 0.9.3.1 also addresses the CVE, see THRIFT-4506 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888030#comment-16888030 ] David Lavati edited comment on HIVE-21173 at 7/18/19 2:30 PM: -- There weren't any generated changes, as this release didn't affect that part of the codebase. was (Author: dlavati): There weren't any generated changes, as this release only affected the related jar. > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 > (CVE-2018-1320). Updating thrift to the latest version will remove that > vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. > > Ps.: 0.9.3.1 also addresses the CVE, see THRIFT-4506 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888038#comment-16888038 ] Laszlo Bodor commented on HIVE-21173: - assuming that 0.9.3-1 doesn't really affect already generated thrift files, +1 > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 > (CVE-2018-1320). Updating thrift to the latest version will remove that > vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. > > Ps.: 0.9.3.1 also addresses the CVE, see THRIFT-4506 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-16690) Configure Tez cartesian product edge based on LLAP cluster size
[ https://issues.apache.org/jira/browse/HIVE-16690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888050#comment-16888050 ] Hive QA commented on HIVE-16690: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 9s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 3 new + 36 unchanged - 0 fixed = 39 total (was 36) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-18085/dev-support/hive-personality.sh | | git revision | master / 374f361 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-18085/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-18085/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Configure Tez cartesian product edge based on LLAP cluster size > --- > > Key: HIVE-16690 > URL: https://issues.apache.org/jira/browse/HIVE-16690 > Project: Hive > Issue Type: Bug >Reporter: Zhiyuan Yang >Assignee: Zhiyuan Yang >Priority: Major > Attachments: HIVE-16690.1.patch, HIVE-16690.2.patch, > HIVE-16690.2.patch, HIVE-16690.2.patch, HIVE-16690.2.patch, > HIVE-16690.addendum.patch > > > In HIVE-14731 we are using default value for target parallelism of fair > cartesian product edge. Ideally this should be set according to cluster size. > In case of LLAP it's pretty easy to get cluster size, i.e., number of > executors.
-- This message was sent by Atlassian JIRA (v7.6.14#76016)
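The sizing idea in the HIVE-16690 description above can be sketched independently of Hive internals: derive the cartesian-product edge's target parallelism from the LLAP cluster's executor count, with a fallback and a cap. This is a minimal illustration of the approach only; the class and constant names below are assumptions, not Hive's or Tez's actual API.

```java
// Hedged sketch: pick a target parallelism for a fair cartesian product edge
// from LLAP cluster size. All names here are illustrative, not Hive's API.
public class CartesianEdgeSizing {
    // Fallback when the cluster size cannot be determined (assumed value).
    static final int DEFAULT_PARALLELISM = 1;

    // numExecutors: LLAP executor count, assumed to come from the LLAP registry.
    public static int targetParallelism(int numExecutors) {
        if (numExecutors <= 0) {
            return DEFAULT_PARALLELISM; // no cluster info: keep the old default
        }
        // roughly one task per executor, capped to avoid a tiny-fragment explosion
        return Math.min(numExecutors, 1000);
    }
}
```

The cap and the one-task-per-executor ratio are placeholders; the real patch would tune these against actual LLAP deployments.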
[jira] [Commented] (HIVE-2526) "lastAceesTime" is always zero when executed through "describe extended " unlike "show table extende like " where lastAccessTime is updated.
[ https://issues.apache.org/jira/browse/HIVE-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888066#comment-16888066 ] Jim Keenan commented on HIVE-2526: -- Is there an ETA for when a fix for this will be delivered? I am working on a project with a major insurance provider which urgently requires this feature. > "lastAceesTime" is always zero when executed through "describe extended > " unlike "show table extende like " where > lastAccessTime is updated. > - > > Key: HIVE-2526 > URL: https://issues.apache.org/jira/browse/HIVE-2526 > Project: Hive > Issue Type: Bug >Affects Versions: 0.9.0 > Environment: Linux : SuSE 11 SP1 >Reporter: Rohith Sharma K S >Assignee: Priyadarshini >Priority: Minor > Attachments: HIVE-2526.patch > > > When the table is accessed (load), lastAccessTime is displaying the updated > accessTime in > "show table extended like ". But "describe extended " > is always displaying zero. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-16690) Configure Tez cartesian product edge based on LLAP cluster size
[ https://issues.apache.org/jira/browse/HIVE-16690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888083#comment-16888083 ] Hive QA commented on HIVE-16690: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975155/HIVE-16690.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16681 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=163) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18085/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18085/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18085/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12975155 - PreCommit-HIVE-Build > Configure Tez cartesian product edge based on LLAP cluster size > --- > > Key: HIVE-16690 > URL: https://issues.apache.org/jira/browse/HIVE-16690 > Project: Hive > Issue Type: Bug >Reporter: Zhiyuan Yang >Assignee: Zhiyuan Yang >Priority: Major > Attachments: HIVE-16690.1.patch, HIVE-16690.2.patch, > HIVE-16690.2.patch, HIVE-16690.2.patch, HIVE-16690.2.patch, > HIVE-16690.addendum.patch > > > In HIVE-14731 we are using default value for target parallelism of fair > cartesian product edge. Ideally this should be set according to cluster size. > In case of LLAP it's pretty easy to get cluster size, i.e., number of > executors. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-18227) Tez parallel execution fail
[ https://issues.apache.org/jira/browse/HIVE-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888094#comment-16888094 ] Hive QA commented on HIVE-18227: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-18086/dev-support/hive-personality.sh | | git revision | master / 374f361 | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-18086/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> Tez parallel execution fail > --- > > Key: HIVE-18227 > URL: https://issues.apache.org/jira/browse/HIVE-18227 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18227.1.patch, HIVE-18227.2.patch, > HIVE-18227.3.patch, HIVE-18227.4.patch > > > Running Tez DAGs in parallel within a session fails. Here is the test case: > {code} > set hive.exec.parallel=true; > set hive.merge.tezfiles=true; > set tez.grouping.max-size=10; > set tez.grouping.min-size=1; > from student > insert overwrite table student4 select * > insert overwrite table student5 select * > insert overwrite table student6 select *; > {code} > The merge tasks run in parallel and result in the exception: > {code} > org.apache.tez.dag.api.TezException: App master already running a DAG > at > org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1255) > at > org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:118) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:161) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7471) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
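The stack trace above shows why the HIVE-18227 scenario fails: a single Tez application master accepts one DAG at a time, so when parallel merge tasks submit to the same session concurrently, the second submit is rejected. The failure mode and the serialize-per-session remedy can be modeled with a small stand-in; `TezSessionStub` and the lock-based wrapper below are illustrative only, not Tez's real client API or the actual HIVE-18227 patch.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hedged sketch: a Tez AM runs one DAG at a time, so concurrent submits from
// parallel tasks must be serialized per session. Stand-in classes, not Tez API.
public class SerializedDagSubmitter {
    public static class TezSessionStub {
        private boolean dagRunning = false;
        // Mimics the AM's "App master already running a DAG" rejection.
        public synchronized void submitDag(String name) {
            if (dagRunning) {
                throw new IllegalStateException("App master already running a DAG");
            }
            dagRunning = true;
        }
        public synchronized void awaitCompletion() { dagRunning = false; }
    }

    private final ReentrantLock submitLock = new ReentrantLock();
    private final TezSessionStub session = new TezSessionStub();

    // Hold the lock for the DAG's whole lifetime so parallel callers queue up
    // instead of racing into the AM while another DAG is still running.
    public void runDag(String name) {
        submitLock.lock();
        try {
            session.submitDag(name);
            session.awaitCompletion();
        } finally {
            submitLock.unlock();
        }
    }
}
```

With the lock, back-to-back submissions from parallel tasks succeed; submitting directly to the raw stub while a DAG is still running reproduces the exception.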
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.38.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.4.patch, HIVE-21637.5.patch, > HIVE-21637.6.patch, HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
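The HIVE-21637 description above contrasts an asynchronously refreshed CachedStore (eventual consistency under HMS HA) with a synchronized cache. One common way to make a cache "synchronized" is to stamp entries with a commit version and serve a cached value only when it matches the store's latest committed version. The sketch below is a simplified, single-process model of that idea under stated assumptions; it is not CachedStore's actual design, and it invalidates pessimistically (any write invalidates all cached entries).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch: version-checked cache reads, so a reader never sees a value
// older than the latest committed write. Illustrative model, not HMS code.
public class VersionedCache {
    static final class Entry {
        final String value; final long version;
        Entry(String v, long ver) { value = v; version = ver; }
    }

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, String> backingStore = new ConcurrentHashMap<>();
    private final AtomicLong commitVersion = new AtomicLong();

    // Write-through: update the store, bump the global version, refresh the cache.
    public void put(String key, String value) {
        backingStore.put(key, value);
        long v = commitVersion.incrementAndGet();
        cache.put(key, new Entry(value, v));
    }

    // Serve from cache only when the entry is provably current; otherwise
    // fall through to the store and re-cache at the latest version.
    public String get(String key) {
        Entry e = cache.get(key);
        long latest = commitVersion.get();
        if (e != null && e.version == latest) {
            return e.value;
        }
        String fresh = backingStore.get(key);
        if (fresh != null) {
            cache.put(key, new Entry(fresh, latest));
        }
        return fresh;
    }
}
```

A production design would track versions per object (e.g. per table) rather than one global counter, and in HA would read the commit version from shared state such as the notification log.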
[jira] [Assigned] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R reassigned HIVE-22009: - > CTLV with user specified location is not honoured > -- > > Key: HIVE-22009 > URL: https://issues.apache.org/jira/browse/HIVE-22009 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > > Steps to repro : > > {code:java} > CREATE TABLE emp_table (id int, name string, salary int); > insert into emp_table values(1,'a',2); > CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1; > CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION > '/tmp/emp_ext_table'; > show create table emp_ext_table;{code} > > {code:java} > ++ > | createtab_stmt | > ++ > | CREATE EXTERNAL TABLE `emp_ext_table`( | > | `id` int, | > | `name` string, | > | `salary` int) | > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' | > | STORED AS INPUTFORMAT | > | 'org.apache.hadoop.mapred.TextInputFormat' | > | OUTPUTFORMAT | > | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' | > | LOCATION | > | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' | > | TBLPROPERTIES ( | > | 'bucketing_version'='2', | > | 'transient_lastDdlTime'='1563467962') | > ++{code} > Table Location is not '/tmp/emp_ext_table', instead location is set to > default warehouse path. > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
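The intended behavior in the HIVE-22009 repro above is simple to state: when the `CREATE EXTERNAL TABLE ... LIKE` statement carries an explicit `LOCATION` clause, that path should win over the derived warehouse default. The sketch below models only that resolution rule; `CreateLikeSpec` and its field names are hypothetical stand-ins, not Hive's actual descriptor classes or the patch for this issue.

```java
// Hedged sketch of the expected CTLV location resolution. Illustrative names,
// not Hive's SemanticAnalyzer/DDL classes.
public class CreateLikeLocation {
    public static class CreateLikeSpec {
        public String userLocation;      // LOCATION clause value, null if absent
        public String warehouseDefault;  // derived default warehouse path
    }

    // The bug described above: the explicit location was ignored when the
    // source object was a view. The fix is to prefer it whenever present.
    public static String resolveLocation(CreateLikeSpec spec) {
        if (spec.userLocation != null && !spec.userLocation.isEmpty()) {
            return spec.userLocation;    // honour the explicit LOCATION clause
        }
        return spec.warehouseDefault;    // fall back to the warehouse path
    }
}
```

Under this rule, the repro's `show create table emp_ext_table` would report `/tmp/emp_ext_table` rather than the warehouse path.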
[jira] [Commented] (HIVE-21711) Regression caused by HIVE-21279 for blobstorage fs
[ https://issues.apache.org/jira/browse/HIVE-21711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888154#comment-16888154 ] Vineet Garg commented on HIVE-21711: [~prasanth_j] [~gopalv] Can you take a look? > Regression caused by HIVE-21279 for blobstorage fs > -- > > Key: HIVE-21711 > URL: https://issues.apache.org/jira/browse/HIVE-21711 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-21711.1.patch, HIVE-21711.2.patch, > HIVE-21711.3,patch, HIVE-21711.4.patch, HIVE-21711.5.patch > > > HIVE-21279 caused a regression wherein CTAS/create materialized views > statement for blobstorage is now always renaming files. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22009: -- Attachment: HIVE-22009.patch Status: Patch Available (was: In Progress) > CTLV with user specified location is not honoured > -- > > Key: HIVE-22009 > URL: https://issues.apache.org/jira/browse/HIVE-22009 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22009.patch > > > Steps to repro : > > {code:java} > CREATE TABLE emp_table (id int, name string, salary int); > insert into emp_table values(1,'a',2); > CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1; > CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION > '/tmp/emp_ext_table'; > show create table emp_ext_table;{code} > > {code:java} > ++ > | createtab_stmt | > ++ > | CREATE EXTERNAL TABLE `emp_ext_table`( | > | `id` int, | > | `name` string, | > | `salary` int) | > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' | > | STORED AS INPUTFORMAT | > | 'org.apache.hadoop.mapred.TextInputFormat' | > | OUTPUTFORMAT | > | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' | > | LOCATION | > | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' | > | TBLPROPERTIES ( | > | 'bucketing_version'='2', | > | 'transient_lastDdlTime'='1563467962') | > ++{code} > Table Location is not '/tmp/emp_ext_table', instead location is set to > default warehouse path. > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Work started] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-22009 started by Naresh P R. - > CTLV with user specified location is not honoured > -- > > Key: HIVE-22009 > URL: https://issues.apache.org/jira/browse/HIVE-22009 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22009.patch > > > Steps to repro : > > {code:java} > CREATE TABLE emp_table (id int, name string, salary int); > insert into emp_table values(1,'a',2); > CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1; > CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION > '/tmp/emp_ext_table'; > show create table emp_ext_table;{code} > > {code:java} > ++ > | createtab_stmt | > ++ > | CREATE EXTERNAL TABLE `emp_ext_table`( | > | `id` int, | > | `name` string, | > | `salary` int) | > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' | > | STORED AS INPUTFORMAT | > | 'org.apache.hadoop.mapred.TextInputFormat' | > | OUTPUTFORMAT | > | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' | > | LOCATION | > | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' | > | TBLPROPERTIES ( | > | 'bucketing_version'='2', | > | 'transient_lastDdlTime'='1563467962') | > ++{code} > Table Location is not '/tmp/emp_ext_table', instead location is set to > default warehouse path. > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-18227) Tez parallel execution fail
[ https://issues.apache.org/jira/browse/HIVE-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888165#comment-16888165 ] Hive QA commented on HIVE-18227: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975159/HIVE-18227.4.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16682 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18086/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18086/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18086/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12975159 - PreCommit-HIVE-Build > Tez parallel execution fail > --- > > Key: HIVE-18227 > URL: https://issues.apache.org/jira/browse/HIVE-18227 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18227.1.patch, HIVE-18227.2.patch, > HIVE-18227.3.patch, HIVE-18227.4.patch > > > Running Tez DAGs in parallel within a session fails. 
Here is the test case: > {code} > set hive.exec.parallel=true; > set hive.merge.tezfiles=true; > set tez.grouping.max-size=10; > set tez.grouping.min-size=1; > from student > insert overwrite table student4 select * > insert overwrite table student5 select * > insert overwrite table student6 select *; > {code} > The merge tasks run in parallel and result in the exception: > {code} > org.apache.tez.dag.api.TezException: App master already running a DAG > at > org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1255) > at > org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:118) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:161) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7471) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-18227) Tez parallel execution fail
[ https://issues.apache.org/jira/browse/HIVE-18227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888166#comment-16888166 ] Hive QA commented on HIVE-18227: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975159/HIVE-18227.4.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18087/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18087/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18087/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12975159/HIVE-18227.4.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12975159 - PreCommit-HIVE-Build > Tez parallel execution fail > --- > > Key: HIVE-18227 > URL: https://issues.apache.org/jira/browse/HIVE-18227 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-18227.1.patch, HIVE-18227.2.patch, > HIVE-18227.3.patch, HIVE-18227.4.patch > > > Running Tez DAGs in parallel within a session fails. 
Here is the test case: > {code} > set hive.exec.parallel=true; > set hive.merge.tezfiles=true; > set tez.grouping.max-size=10; > set tez.grouping.min-size=1; > from student > insert overwrite table student4 select * > insert overwrite table student5 select * > insert overwrite table student6 select *; > {code} > The merge tasks run in parallel and result in the exception: > {code} > org.apache.tez.dag.api.TezException: App master already running a DAG > at > org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1255) > at > org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:118) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:161) > at > org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7471) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267) > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-16983) getFileStatus on accessible s3a://[bucket-name]/folder: throws com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error
[ https://issues.apache.org/jira/browse/HIVE-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888170#comment-16888170 ] Hive QA commented on HIVE-16983: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12876365/HIVE-16983-branch-2.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18088/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18088/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18088/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2019-07-18 17:07:19.907 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-18088/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z branch-2.1 ]] + [[ -d apache-github-branch-2.1-source ]] + [[ ! -d apache-github-branch-2.1-source/.git ]] + [[ ! 
-d apache-github-branch-2.1-source ]] + date '+%Y-%m-%d %T.%3N' 2019-07-18 17:07:19.956 + cd apache-github-branch-2.1-source + git fetch origin >From https://github.com/apache/hive 7fecb6f..7534f82 branch-1 -> origin/branch-1 fd2f7c8..2039350 branch-2 -> origin/branch-2 93163cb..9fb2238 branch-2.3 -> origin/branch-2.3 31a417e..91c243c branch-3 -> origin/branch-3 3e16420..378083e branch-3.1 -> origin/branch-3.1 e7f2fccd..374f361 master -> origin/master * [new branch] revert-648-hive21783 -> origin/revert-648-hive21783 + git reset --hard HEAD HEAD is now at 292a98f HIVE-16480: Empty vector batches of floats or doubles gets EOFException (Owen O'Malley via Jesus Camacho Rodriguez) + git clean -f -d + git checkout branch-2.1 Already on 'branch-2.1' Your branch is up-to-date with 'origin/branch-2.1'. + git reset --hard origin/branch-2.1 HEAD is now at 292a98f HIVE-16480: Empty vector batches of floats or doubles gets EOFException (Owen O'Malley via Jesus Camacho Rodriguez) + git merge --ff-only origin/branch-2.1 Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2019-07-18 17:07:37.288 + rm -rf ../yetus_PreCommit-HIVE-Build-18088 + mkdir ../yetus_PreCommit-HIVE-Build-18088 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-18088 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-18088/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/pom.xml: does not exist in index error: patch failed: pom.xml:168 Falling back to three-way merge... Applied patch to 'pom.xml' with conflicts. Going to apply patch with: git apply -p1 error: patch failed: pom.xml:168 Falling back to three-way merge... Applied patch to 'pom.xml' with conflicts. 
U pom.xml + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-18088 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12876365 - PreCommit-HIVE-Build > getFileStatus on accessible s3a://[bucket-name]/folder: throws > com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon > S3; Status Code: 403; Error Code: 403 Forbidden; > - > > Key: HIVE-16983 > URL: https://issues.apache.org/jira/browse/HIVE-16983 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.1 > Environment: Hive 2.1.1 on Ubuntu 14.04 AMI in AWS EC2, connecting to > S3 using s3a:// protocol >Reporter: Alex Baretto >Assignee: Vlad Gudikov >Priority: Major > Fix For: 2.1.1 > > Attachments: HIVE-16983-branch-2.1.patch > > > I've followed various published documentation on integrating Apache Hive > 2.1.1
[jira] [Updated] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access
[ https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-21838: - Attachment: HIVE-21838.11.patch > Hive Metastore Translation: Add API call to tell client why table has limited > access > > > Key: HIVE-21838 > URL: https://issues.apache.org/jira/browse/HIVE-21838 > Project: Hive > Issue Type: Sub-task >Reporter: Yongzhi Chen >Assignee: Naveen Gangam >Priority: Major > Attachments: HIVE-21838.10.patch, HIVE-21838.11.patch, > HIVE-21838.2.patch, HIVE-21838.3.patch, HIVE-21838.4.patch, > HIVE-21838.5.patch, HIVE-21838.6.patch, HIVE-21838.7.patch, > HIVE-21838.8.patch, HIVE-21838.9.patch, HIVE-21838.patch > > > When a table access type is Read-only or None, we need a way to tell clients > why. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access
[ https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-21838: - Status: Open (was: Patch Available) > Hive Metastore Translation: Add API call to tell client why table has limited > access > > > Key: HIVE-21838 > URL: https://issues.apache.org/jira/browse/HIVE-21838 > Project: Hive > Issue Type: Sub-task >Reporter: Yongzhi Chen >Assignee: Naveen Gangam >Priority: Major > Attachments: HIVE-21838.10.patch, HIVE-21838.11.patch, > HIVE-21838.2.patch, HIVE-21838.3.patch, HIVE-21838.4.patch, > HIVE-21838.5.patch, HIVE-21838.6.patch, HIVE-21838.7.patch, > HIVE-21838.8.patch, HIVE-21838.9.patch, HIVE-21838.patch > > > When a table access type is Read-only or None, we need a way to tell clients > why. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access
[ https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-21838: - Status: Patch Available (was: Open) > Hive Metastore Translation: Add API call to tell client why table has limited > access > > > Key: HIVE-21838 > URL: https://issues.apache.org/jira/browse/HIVE-21838 > Project: Hive > Issue Type: Sub-task >Reporter: Yongzhi Chen >Assignee: Naveen Gangam >Priority: Major > Attachments: HIVE-21838.10.patch, HIVE-21838.11.patch, > HIVE-21838.2.patch, HIVE-21838.3.patch, HIVE-21838.4.patch, > HIVE-21838.5.patch, HIVE-21838.6.patch, HIVE-21838.7.patch, > HIVE-21838.8.patch, HIVE-21838.9.patch, HIVE-21838.patch > > > When a table access type is Read-only or None, we need a way to tell clients > why. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22009: -- Attachment: HIVE-22009-branch-3.1.patch > CTLV with user specified location is not honoured > -- > > Key: HIVE-22009 > URL: https://issues.apache.org/jira/browse/HIVE-22009 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22009-branch-3.1.patch, HIVE-22009.patch > > > Steps to repro : > > {code:java} > CREATE TABLE emp_table (id int, name string, salary int); > insert into emp_table values(1,'a',2); > CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1; > CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION > '/tmp/emp_ext_table'; > show create table emp_ext_table;{code} > > {code:java} > ++ > | createtab_stmt | > ++ > | CREATE EXTERNAL TABLE `emp_ext_table`( | > | `id` int, | > | `name` string, | > | `salary` int) | > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' | > | STORED AS INPUTFORMAT | > | 'org.apache.hadoop.mapred.TextInputFormat' | > | OUTPUTFORMAT | > | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' | > | LOCATION | > | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' | > | TBLPROPERTIES ( | > | 'bucketing_version'='2', | > | 'transient_lastDdlTime'='1563467962') | > ++{code} > Table Location is not '/tmp/emp_ext_table', instead location is set to > default warehouse path. > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22009: Affects Version/s: 3.1.1 (now 4.0.0, 3.1.1)
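The repro above shows CREATE TABLE LIKE <view> dropping the user-supplied LOCATION and falling back to the default warehouse path. The intended rule is simple, and the sketch below states it in code. This is an illustration only: `resolveTableLocation` and its parameters are hypothetical names, not Hive's actual internals or the patch's implementation.

```java
// Illustrative sketch of the location-resolution rule that CTLV should
// follow; resolveTableLocation is a hypothetical name, not Hive's API.
public class CtlvLocationSketch {
    /**
     * Returns the user-supplied LOCATION when one was given in the DDL,
     * otherwise the default warehouse path for the new table.
     */
    static String resolveTableLocation(String userLocation, String warehouseDefault) {
        return (userLocation != null && !userLocation.isEmpty())
                ? userLocation
                : warehouseDefault;
    }

    public static void main(String[] args) {
        String warehouse = "hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table";
        // CTLV with an explicit LOCATION must honour it...
        if (!resolveTableLocation("/tmp/emp_ext_table", warehouse).equals("/tmp/emp_ext_table"))
            throw new AssertionError("user location not honoured");
        // ...and only fall back to the warehouse path when none was given.
        if (!resolveTableLocation(null, warehouse).equals(warehouse))
            throw new AssertionError("default location expected");
        System.out.println("ok");
    }
}
```

The bug report amounts to the first branch never being taken when the LIKE source is a view rather than a table.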
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.38.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.4.patch, HIVE-21637.5.patch, > HIVE-21637.6.patch, HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: Attachment: (was: HIVE-21637.38.patch)
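The HIVE-21637 description contrasts today's asynchronous CachedStore, which under HMS HA can only offer eventual consistency, with a synchronized cache. One common way to make cache reads consistent is to tag each entry with the write version (for example, a notification-event id) it was built from, so a reader can reject entries older than the newest committed write. The sketch below shows that idea in miniature; it is illustrative only and assumes nothing about Hive's actual CachedStore internals.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch (not Hive's CachedStore API): a cache whose entries
// carry the version they were built from, letting readers detect staleness.
class VersionedCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long version;
        Entry(V value, long version) { this.value = value; this.version = version; }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final AtomicLong committedVersion = new AtomicLong();

    void put(K key, V value) {
        // Each write advances the committed version and stamps the entry.
        long v = committedVersion.incrementAndGet();
        map.put(key, new Entry<>(value, v));
    }

    /** Returns the cached value only if it is at least as new as minVersion. */
    V getIfFresh(K key, long minVersion) {
        Entry<V> e = map.get(key);
        return (e != null && e.version >= minVersion) ? e.value : null;
    }

    long latestVersion() { return committedVersion.get(); }

    public static void main(String[] args) {
        VersionedCache<String, String> cache = new VersionedCache<>();
        cache.put("db1.tbl1", "schema-v1");
        long v1 = cache.latestVersion();
        if (!"schema-v1".equals(cache.getIfFresh("db1.tbl1", v1)))
            throw new AssertionError("fresh entry should be served");
        // A later committed write advances the version; the old entry no
        // longer satisfies a reader demanding the newest state (a cache
        // miss here would fall through to the backing store).
        cache.put("db1.tbl2", "schema-v1");
        if (cache.getIfFresh("db1.tbl1", cache.latestVersion()) != null)
            throw new AssertionError("stale entry must not be served");
        System.out.println("ok");
    }
}
```

An eventually-consistent cache, by contrast, would happily serve the stale `db1.tbl1` entry until a background refresh caught up, which is exactly the HA behavior the JIRA sets out to fix.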
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888237#comment-16888237 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975177/HIVE-21637.38.patch {color:green}SUCCESS:{color} +1 due to 124 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 46 failed/errored test(s), 16675 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats2] (batchId=51) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock4] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_nonpart] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part] (batchId=52) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_sizebug] (batchId=89) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets] (batchId=179) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=178) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3] (batchId=174) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats] (batchId=182) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[allow_change_col_type_par_neg] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_fk_tbl1] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_pk_tbl] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_invalid_constraint2] (batchId=100) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testMultiThreadedSharedCacheOps (batchId=232) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testPartitionSize (batchId=232) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testPrewarm (batchId=232) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testSharedStoreTable (batchId=232) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testPartitionOpsForUpdateUsingEvents (batchId=243) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testTableColumnStatistics (batchId=243) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testTableColumnStatisticsTxnTable (batchId=243) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testTableOpsForUpdateUsingEvents (batchId=243) 
org.apache.hadoop.hive.ql.TestTxnCommands.testParallelInsertAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.TestTxnCommands.testParallelTruncateAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.TestTxnCommands3.testCleaner2 (batchId=345) org.apache.hadoop.hive.ql.TestTxnCommands3.testDeleteEventPruningOff (batchId=345) org.apache.hadoop.hive.ql.TestTxnCommands3.testDeleteEventPruningOn (batchId=345) org.apache.hadoop.hive.ql.TestTxnCommands3.testRenameTable (batchId=345) org.apache.hadoop.hive.ql.TestTxnCommands3.testSdpoBucketed (batchId=345) org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testParallelInsertAnalyzeStats (batchId=331) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosExternalTable (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosWithDmlOperations (batchId=255) org.apache.hadoop.hive.ql.parse
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888243#comment-16888243 ] Hive QA commented on HIVE-21637: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 36s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 29s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 12s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 12s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} beeline in master has 44 extant Findbugs warnings. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 50s{color} | {color:blue} itests/util in master has 44 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 42s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 10 new + 496 unchanged - 4 fixed = 506 total (was 500) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 167 new + 2232 unchanged - 65 fixed = 2399 total (was 2297) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s{color} | {color:red} ql: The patch generated 82 new + 2262 unchanged - 32 fixed = 2344 total (was 2294) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The patch generated 5 new + 31 unchanged - 0 fixed = 36 total (was 31) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} itests/hcata
[jira] [Work logged] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?focusedWorklogId=279278&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279278 ] ASF GitHub Bot logged work on HIVE-21173: - Author: ASF GitHub Bot Created on: 18/Jul/19 19:47 Start Date: 18/Jul/19 19:47 Worklog Time Spent: 10m Work Description: odraese commented on issue #730: HIVE-21173 Upgrade Apache Thrift to 0.9.3-1 URL: https://github.com/apache/hive/pull/730#issuecomment-512960676 We actually need to upgrade to Thrift 0.12 due to 4 CVE in the 0.9 line: https://www.cvedetails.com/product/38295/Apache-Thrift.html?vendor_id=45 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 279278) Time Spent: 20m (was: 10m) > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 20m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 > (CVE-2018-1320). Updating thrift to the latest version will remove that > vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. > > Ps.: 0.9.3.1 also addresses the CVE, see THRIFT-4506 -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Work logged] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?focusedWorklogId=279279&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279279 ] ASF GitHub Bot logged work on HIVE-21173: Author: ASF GitHub Bot Created on: 18/Jul/19 19:50 Start Date: 18/Jul/19 19:50 Worklog Time Spent: 10m Work Description: odraese commented on issue #730: HIVE-21173 Upgrade Apache Thrift to 0.9.3-1 URL: https://github.com/apache/hive/pull/730#issuecomment-512960676 We actually need to upgrade to Thrift 0.12 due to 4 CVE in the 0.9 line: https://www.cvedetails.com/product/38295/Apache-Thrift.html?vendor_id=45 ...or are the CVE fixed in 0.9.3-1 (it seems a recent build from Feb)? Worklog Id: (was: 279279) Time Spent: 0.5h (was: 20m)
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888285#comment-16888285 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975184/HIVE-21637.38.patch {color:green}SUCCESS:{color} +1 due to 124 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 46 failed/errored test(s), 16638 tests executed *Failed tests:* {noformat} TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed out) (batchId=232) TestObjectStore - did not produce a TEST-*.xml file (likely timed out) (batchId=232) TestPartitionProjectionEvaluator - did not produce a TEST-*.xml file (likely timed out) (batchId=232) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_stats2] (batchId=51) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock4] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_nonpart] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part] (batchId=52) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_sizebug] (batchId=89) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets] (batchId=179) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=171) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=178) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats] (batchId=182) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[allow_change_col_type_par_neg] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_fk_tbl1] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_pk_tbl] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_invalid_constraint2] (batchId=100) org.apache.hadoop.hive.llap.cache.TestBuddyAllocator.testMTT[2] (batchId=358) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.tablesList (batchId=222) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testMultiThreadedSharedCacheOps (batchId=232) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testPartitionSize (batchId=232) org.apache.hadoop.hive.metastore.cache.TestCachedStore.testPrewarm (batchId=232) 
org.apache.hadoop.hive.metastore.cache.TestCachedStore.testSharedStoreTable (batchId=232) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testPartitionOpsForUpdateUsingEvents (batchId=243) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testTableColumnStatistics (batchId=243) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testTableColumnStatisticsTxnTable (batchId=243) org.apache.hadoop.hive.metastore.cache.TestCachedStoreUpdateUsingEvents.testTableOpsForUpdateUsingEvents (batchId=243) org.apache.hadoop.hive.ql.TestTxnCommands.testParallelInsertAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testParallelInsertAnalyzeStats (batchId=331) org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testParallelTruncateAnalyzeStats (batchId=331) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosExternalTable (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicat
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888293#comment-16888293 ] Hive QA commented on HIVE-21637: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 35s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 33s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 17s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 9s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} beeline in master has 44 extant Findbugs warnings. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 26s{color} | {color:blue} standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 46s{color} | {color:blue} itests/util in master has 44 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 44s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 10 new + 496 unchanged - 4 fixed = 506 total (was 500) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 167 new + 2232 unchanged - 65 fixed = 2399 total (was 2297) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 4s{color} | {color:red} ql: The patch generated 82 new + 2262 unchanged - 32 fixed = 2344 total (was 2294) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The patch generated 5 new + 31 unchanged - 0 fixed = 36 total (was 31) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} itests/hcata
[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888292#comment-16888292 ] Hive QA commented on HIVE-22009: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 16s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-18091/patches/PreCommit-HIVE-Build-18091.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-18091/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated.
[jira] [Work logged] (HIVE-21173) Upgrade Apache Thrift to 0.9.3-1
[ https://issues.apache.org/jira/browse/HIVE-21173?focusedWorklogId=279301&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-279301 ] ASF GitHub Bot logged work on HIVE-21173: - Author: ASF GitHub Bot Created on: 18/Jul/19 20:10 Start Date: 18/Jul/19 20:10 Worklog Time Spent: 10m Work Description: dlavati commented on issue #730: HIVE-21173 Upgrade Apache Thrift to 0.9.3-1 URL: https://github.com/apache/hive/pull/730#issuecomment-512968915 That would be the best, [HIVE-21000](https://issues.apache.org/jira/browse/HIVE-21000) is the ticket for upgrading to the latest thrift version, but we're kind of blocked there without a new accumulo release (the latest release is still using 0.9.3-1) 0.9.3-1 covers at least CVE-2018-1320 (see [THRIFT-4506](https://issues.apache.org/jira/browse/THRIFT-4506)) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 279301) Time Spent: 40m (was: 0.5h) > Upgrade Apache Thrift to 0.9.3-1 > > > Key: HIVE-21173 > URL: https://issues.apache.org/jira/browse/HIVE-21173 > Project: Hive > Issue Type: Bug > Components: Thrift API >Reporter: James E. King III >Assignee: David Lavati >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21173.01.patch > > Time Spent: 40m > Remaining Estimate: 0h > > The project currently depends on libthrift-0.9.3, however thrift released > 0.12.0 on 2019-JAN-04. This release includes a security fix for THRIFT-4506 > (CVE-2018-1320). Updating thrift to the latest version will remove that > vulnerability. > Also note the Apache Thrift project does not publish "libfb303" any longer. > fb303 is contributed code (in '/contrib') and it has not been maintained. 
> > P.S.: 0.9.3-1 also addresses the CVE; see THRIFT-4506. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888347#comment-16888347 ] Hive QA commented on HIVE-22009: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975183/HIVE-22009-branch-3.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 118 failed/errored test(s), 14409 tests executed *Failed tests:* {noformat}
TestAddPartitions - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestAddPartitionsFromPartSpec - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) (batchId=232)
TestAggregateStatsCache - did not produce a TEST-*.xml file (likely timed out) (batchId=232)
TestAlterPartitions - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestAppendPartitions - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=272)
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) (batchId=236)
TestCatalogNonDefaultClient - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestCatalogNonDefaultSvr - did not produce a TEST-*.xml file (likely timed out) (batchId=232)
TestCatalogOldClient - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestCatalogs - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestCheckConstraint - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestCloseableThreadLocal - did not produce a TEST-*.xml file (likely timed out) (batchId=330)
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed out) (batchId=236)
TestDatabaseName - did not produce a TEST-*.xml file (likely timed out) (batchId=195)
TestDatabases - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) (batchId=236)
TestDefaultConstraint - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestDropPartitions - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=272)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed out) (batchId=229)
TestExchangePartitions - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestFMSketchSerialization - did not produce a TEST-*.xml file (likely timed out) (batchId=237)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestForeignKey - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestFunctions - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestGetPartitions - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestGetPartitionsUsingProjectionAndFilterSpecs - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestGetTableMeta - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestHLLNoBias - did not produce a TEST-*.xml file (likely timed out) (batchId=237)
TestHLLSerialization - did not produce a TEST-*.xml file (likely timed out) (batchId=237)
TestHdfsUtils - did not produce a TEST-*.xml file (likely timed out) (batchId=232)
TestHiveAlterHandler - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed out) (batchId=236)
TestHiveMetaStorePartitionSpecs - did not produce a TEST-*.xml file (likely timed out) (batchId=228)
TestHiveMetaStoreSchemaMethods - did not produce a TEST-*.xml file (likely timed out) (batchId=236)
TestHiveMetaStoreTimeout - did not produce a TEST-*.xml file (likely timed out) (batchId=236)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) (batchId=236)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file (likely timed out) (batchId=231)
TestHiveMetaToolCommandLine - did not produce a TEST-*.xml file (likely timed out) (batchId=232)
TestHiveMetastoreCli - did not produce a TEST-*.xml file (likely timed out) (batchId=226)
TestHmsServerAuthorization - did not produce a TEST-*.xml file (likely timed out) (batchId=232)
TestHyperLogLog - did not produce a TEST-*.xml file (likely timed out) (batchId=237)
TestHyperLogLogDense - did not produce a TEST-*.xml file (likely timed out) (batchId=237)
TestHyperLogLogMerge - did not produce a TEST-*.xml file (likely timed out) (batchId=237)
TestHyperLogLogSparse - did not produce a TEST-*.xml file (likely timed out) (batchId=237)
TestJSONMessageDeserializer - did not produce a TEST-*.xml file (likely timed out) (batchId=232)
TestListParti
[jira] [Assigned] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-19113: -- Assignee: Jesus Camacho Rodriguez (was: Deepak Jaiswal) > Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified > -- > > Key: HIVE-19113 > URL: https://issues.apache.org/jira/browse/HIVE-19113 > Project: Hive > Issue Type: Improvement > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Gopal V >Assignee: Jesus Camacho Rodriguez >Priority: Major > > The user's expectation of > "create external table bucketed (key int) clustered by (key) into 4 buckets > stored as orc;" > is that the table will cluster the key into 4 buckets, while the file layout > does not do any actual clustering of rows. > In the absence of a "SORTED BY", this can automatically do a "SORTED BY > (key)" to cluster the keys within the file as expected. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
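The distinction in the issue description can be sketched in a few lines of Python (an illustrative model only, assuming Hive's default scheme of hashing the clustering key modulo the bucket count; not Hive's actual Java code): CLUSTERED BY only decides which bucket file a row lands in, while an implicit SORTED BY would additionally order the rows within each file.

```python
# Toy model of bucketing vs. in-file ordering; not Hive's implementation.
NUM_BUCKETS = 4

def bucket_of(key: int) -> int:
    # Assumption: default scheme, hash of the clustering key mod bucket count.
    return hash(key) % NUM_BUCKETS

def write_buckets(keys, sort_within_bucket=False):
    buckets = {b: [] for b in range(NUM_BUCKETS)}
    for k in keys:
        buckets[bucket_of(k)].append(k)   # CLUSTERED BY: choose the file
    if sort_within_bucket:                # the proposed implicit SORTED BY (key)
        for b in buckets:
            buckets[b].sort()
    return buckets

# 9, 1, 5, 13 all hash into the same bucket; only the second call orders them.
unsorted_layout = write_buckets([9, 1, 5, 13])
clustered_layout = write_buckets([9, 1, 5, 13], sort_within_bucket=True)
```

Both layouts satisfy the table's bucket count; the difference is purely whether rows inside one bucket file arrive in insertion order or key order.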
[jira] [Updated] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19113: --- Status: Patch Available (was: In Progress) -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Work started] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-19113 started by Jesus Camacho Rodriguez. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19113: --- Attachment: HIVE-19113.patch -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-21225: Attachment: HIVE-21225.15.patch > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.10.patch, > HIVE-21225.11.patch, HIVE-21225.12.patch, HIVE-21225.13.patch, > HIVE-21225.14.patch, HIVE-21225.15.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, > HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
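The proposed change can be sketched with a hypothetical Python analogy (function names here are invented for illustration; the actual patch is in Hive's Java code): take one recursive directory listing up front and answer every subsequent question from that in-memory snapshot, instead of issuing separate filesystem calls inside the inner loop.

```python
import os
import tempfile

def snapshot(root):
    # One recursive listing; all later checks read this dict, not the FS.
    listing = {}
    for dirpath, _, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        listing[rel] = sorted(filenames)
    return listing

def deltas(listing):
    # Hypothetical stand-in for enumerating delta directories from the snapshot.
    return sorted(d for d in listing if os.path.basename(d).startswith("delta_"))

# Build a tiny fake ACID partition directory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "delta_1_1"))
os.makedirs(os.path.join(root, "base_5"))
open(os.path.join(root, "delta_1_1", "bucket_0"), "w").close()

snap = snapshot(root)
# Every further question is answered from `snap`, with no extra FS round trips.
```

The point of the analogy: the NameNode (or ObjectStore) is contacted once, and checks like isRawFormat()/isValidBase() in the real code would then operate on the cached listing.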
[jira] [Commented] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access
[ https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888380#comment-16888380 ] Hive QA commented on HIVE-21838: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 29s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 14s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 17s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 19 new + 389 unchanged - 28 fixed = 408 total (was 417) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 89 new + 1013 unchanged - 23 fixed = 1102 total (was 1036) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s{color} | {color:red} itests/hive-unit: The patch generated 20 new + 142 unchanged - 13 fixed = 162 total (was 155) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 16s{color} | {color:red} standalone-metastore/metastore-server generated 1 new + 179 unchanged - 0 fixed = 180 total (was 179) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 19s{color} | {color:red} standalone-metastore_metastore-common generated 2 new + 47 unchanged - 0 fixed = 49 total (was 47) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:standalone-metastore/metastore-server | | | Private method org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(RawStore, Table, EnvironmentContext, List, List, List, List, List, List, List, String) is never called At HiveMetaStore.java:List, List, List, List, List, List, List, String) is never called At HiveMetaStore.java:[lines 1940-1960] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personali
[jira] [Commented] (HIVE-21838) Hive Metastore Translation: Add API call to tell client why table has limited access
[ https://issues.apache.org/jira/browse/HIVE-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888390#comment-16888390 ] Hive QA commented on HIVE-21838: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975182/HIVE-21838.11.patch {color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16682 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=273) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18092/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18092/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18092/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12975182 - PreCommit-HIVE-Build > Hive Metastore Translation: Add API call to tell client why table has limited > access > > > Key: HIVE-21838 > URL: https://issues.apache.org/jira/browse/HIVE-21838 > Project: Hive > Issue Type: Sub-task >Reporter: Yongzhi Chen >Assignee: Naveen Gangam >Priority: Major > Attachments: HIVE-21838.10.patch, HIVE-21838.11.patch, > HIVE-21838.2.patch, HIVE-21838.3.patch, HIVE-21838.4.patch, > HIVE-21838.5.patch, HIVE-21838.6.patch, HIVE-21838.7.patch, > HIVE-21838.8.patch, HIVE-21838.9.patch, HIVE-21838.patch > > > When a table access type is Read-only or None, we need a way to tell clients > why. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888391#comment-16888391 ] Hive QA commented on HIVE-22009: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975183/HIVE-22009-branch-3.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18093/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18093/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18093/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12975183/HIVE-22009-branch-3.1.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12975183 - PreCommit-HIVE-Build
> CTLV with user specified location is not honoured
> --
>
> Key: HIVE-22009
> URL: https://issues.apache.org/jira/browse/HIVE-22009
> Project: Hive
> Issue Type: Bug
> Affects Versions: 4.0.0, 3.1.1
> Reporter: Naresh P R
> Assignee: Naresh P R
> Priority: Major
> Attachments: HIVE-22009-branch-3.1.patch, HIVE-22009.patch
>
> Steps to repro :
>
> {code:java}
> CREATE TABLE emp_table (id int, name string, salary int);
> insert into emp_table values(1,'a',2);
> CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1;
> CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION '/tmp/emp_ext_table';
> show create table emp_ext_table;{code}
>
> {code:java}
> | createtab_stmt |
> | CREATE EXTERNAL TABLE `emp_ext_table`( |
> | `id` int, |
> | `name` string, |
> | `salary` int) |
> | ROW FORMAT SERDE |
> | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
> | STORED AS INPUTFORMAT |
> | 'org.apache.hadoop.mapred.TextInputFormat' |
> | OUTPUTFORMAT |
> | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
> | LOCATION |
> | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' |
> | TBLPROPERTIES ( |
> | 'bucketing_version'='2', |
> | 'transient_lastDdlTime'='1563467962') |
> {code}
> Table Location is not '/tmp/emp_ext_table', instead location is set to default warehouse path.
-- This message was sent by Atlassian JIRA (v7.6.14#76016)
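The precedence rule at stake is simple to state; here is a hypothetical sketch (the function and its parameters are invented for illustration, not Hive's actual API): a user-specified LOCATION clause should win over the default warehouse path, and the bug behaves as if that clause were absent.

```python
def resolve_location(user_location, warehouse_root, table_name):
    # Expected behavior: an explicit LOCATION clause takes priority;
    # only when it is absent does the table fall back to the warehouse default.
    if user_location:
        return user_location
    return f"{warehouse_root}/{table_name}"

# The reported bug: CREATE EXTERNAL TABLE ... LIKE <view> LOCATION '/tmp/emp_ext_table'
# ended up at the warehouse default, as if user_location had been dropped.
explicit = resolve_location("/tmp/emp_ext_table", "/warehouse/external/hive", "emp_ext_table")
default = resolve_location(None, "/warehouse/external/hive", "emp_ext_table")
```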
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888393#comment-16888393 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975184/HIVE-21637.38.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18094/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18094/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18094/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12975184/HIVE-21637.38.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12975184 - PreCommit-HIVE-Build > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.4.patch, HIVE-21637.5.patch, > 
HIVE-21637.6.patch, HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > updated asynchronously, so in an HMS HA setting we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
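The consistency problem can be illustrated with a toy write-through cache in Python (a sketch under the assumption that "synchronized" means the cache and the backing store are updated in one critical section; this is illustrative shorthand, not the actual design of the patch):

```python
import threading

class SyncedCache:
    """Toy write-through cache: the store and the cache change under one lock,
    so a reader never observes the cache lagging behind the store."""

    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}   # stands in for the metastore backing database
        self._cache = {}   # stands in for CachedStore

    def put(self, key, value):
        with self._lock:
            self._store[key] = value
            self._cache[key] = value   # updated in the same critical section

    def get(self, key):
        with self._lock:
            return self._cache.get(key)

cache = SyncedCache()
cache.put("db1.tbl1", {"location": "/warehouse/tbl1"})
```

An asynchronous cache would instead apply the `_cache` update on a background thread, leaving a window in which `get` returns stale data — the eventual-consistency behavior the issue describes.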
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888394#comment-16888394 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975184/HIVE-21637.38.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18095/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18095/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18095/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12975184/HIVE-21637.38.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12975184 - PreCommit-HIVE-Build -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.39.patch -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-21225: Status: Open (was: Patch Available) -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Assigned] (HIVE-22010) Clean up ShowCreateTableOperation
[ https://issues.apache.org/jira/browse/HIVE-22010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely reassigned HIVE-22010: - > Clean up ShowCreateTableOperation > - > > Key: HIVE-22010 > URL: https://issues.apache.org/jira/browse/HIVE-22010 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: refactor-ddl > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22010) Clean up ShowCreateTableOperation
[ https://issues.apache.org/jira/browse/HIVE-22010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-22010: -- Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-22010) Clean up ShowCreateTableOperation
[ https://issues.apache.org/jira/browse/HIVE-22010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-22010: -- Attachment: HIVE-22010.01.patch -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888400#comment-16888400 ] Hive QA commented on HIVE-19113: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 9s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 45s{color} | {color:red} ql: The patch generated 4 new + 472 unchanged - 4 fixed = 476 total (was 476) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} common in the patch passed. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 28s{color} | {color:green} ql generated 0 new + 2249 unchanged - 1 fixed = 2249 total (was 2250) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 29m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-18096/dev-support/hive-personality.sh | | git revision | master / 374f361 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-18096/yetus/diff-checkstyle-ql.txt | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-18096/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified > -- > > Key: HIVE-19113 > URL: https://issues.apache.org/jira/browse/HIVE-19113 > Project: Hive > Issue Type: Improvement > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Gopal V >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19113.patch > > > The user's expectation of > "create external table bucketed (key int) clustered by (key) into 4 buckets > stored as orc;" > is that the table will cluster the key into 4 buckets, while the file layout > does not do any actual clustering of rows. > In the absence of a "SORTED BY", this can automatically do a "SORTED BY > (key)" to cluster the keys within the file as expected.
[jira] [Commented] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888409#comment-16888409 ] Hive QA commented on HIVE-19113: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975197/HIVE-19113.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 63 failed/errored test(s), 16681 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_11] (batchId=295) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=298) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=298) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid2] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part2] (batchId=64) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_static_ptn_into_bucketed_table] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_memcheck] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=66) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_11] (batchId=2) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_21] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats10] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_bucket] (batchId=29) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets] (batchId=179) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=183) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket2] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket3] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_many] (batchId=178) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_num_reducers2] (batchId=181) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_num_reducers] (batchId=179) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[check_constraint] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[disable_merge_for_bucketing] (batchId=182) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_3] (batchId=181) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_into_default_keyword] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=178) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[load_dyn_part2] (batchId=176) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[murmur_hash_migration] (batchId=177) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sample10_mm] (batchId=176) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[semijoin_hint] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge] (batchId=183) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sqlmerge_stats] (batchId=182) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_fixed_bucket_pruning] (batchId=182) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_bucket] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_insert_into_bucketed_table] (batchId=166) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[disable_merge_for_bucketing] (batchId=195) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez] (batchId=111) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket2] (batchId=135) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket3] (batchId=119) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[disable_merge_for_bucketing] (batchId=148) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[dynpart_sort_optimization] (batchId=136) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[load_d
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888431#comment-16888431 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975200/HIVE-21637.39.patch {color:green}SUCCESS:{color} +1 due to 123 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 16675 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=110) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_fk_tbl1] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_table_constraint_invalid_pk_tbl] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[drop_invalid_constraint2] (batchId=100) org.apache.hadoop.hive.ql.TestTxnCommands.testParallelInsertAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosExternalTable (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosWithDmlOperations (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosWithReplacePolicyDMLOperattion (batchId=255) org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread.testTxnTable (batchId=317) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18098/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18098/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18098/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12975200 - PreCommit-HIVE-Build > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.39.patch, HIVE-21637.4.patch, > HIVE-21637.5.patch, HIVE-21637.6.patch, HIVE-21637.7.patch, > HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
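The consistency gap that the HIVE-21637 description refers to (an asynchronous cache that only converges eventually, versus a synchronized one) can be sketched as follows. This is a minimal stand-in, not Hive's actual CachedStore API; the class and method names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical write-through (synchronized) metastore cache: every write
// updates the backing store and the cache in the same call, so a subsequent
// read through the cache always observes the latest write. An asynchronous
// cache would instead refresh from the store on a background schedule, so
// reads between refreshes can return stale metadata.
public class WriteThroughCache {
    private final Map<String, String> backingStore = new ConcurrentHashMap<>();
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public void put(String key, String value) {
        backingStore.put(key, value); // persist first
        cache.put(key, value);        // then update the cache in the same call
    }

    public String get(String key) {
        // Serve from the cache; fall back to the backing store on a miss.
        String v = cache.get(key);
        return v != null ? v : backingStore.get(key);
    }
}
```

In an HMS HA deployment the real problem is harder than this single-process sketch suggests, since each metastore instance holds its own cache and writes can land on any instance; that cross-instance coordination is what the patches above address.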
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888432#comment-16888432 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975200/HIVE-21637.39.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18099/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18099/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18099/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12975200/HIVE-21637.39.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12975200 - PreCommit-HIVE-Build > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.39.patch, HIVE-21637.4.patch, > 
HIVE-21637.5.patch, HIVE-21637.6.patch, HIVE-21637.7.patch, > HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888437#comment-16888437 ] Hive QA commented on HIVE-21637: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 52s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 31s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 33s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 9s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 14s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} beeline in master has 44 extant Findbugs warnings. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 47s{color} | {color:blue} itests/util in master has 44 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 26s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 10 new + 496 unchanged - 4 fixed = 506 total (was 500) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 177 new + 2232 unchanged - 65 fixed = 2409 total (was 2297) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 4s{color} | {color:red} ql: The patch generated 82 new + 2262 unchanged - 32 fixed = 2344 total (was 2294) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The patch generated 5 new + 31 unchanged - 0 fixed = 36 total (was 31) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} itests/hcata
[jira] [Commented] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888446#comment-16888446 ] Gopal V commented on HIVE-19113: Interesting side-effect {code} - Statistics: Num rows: 4200 Data size: 1253037 Basic stats: COMPLETE Column stats: PARTIAL + Statistics: Num rows: 4200 Data size: 1247197 Basic stats: COMPLETE Column stats: PARTIAL {code} The ORC files got smaller after this change. > Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified > -- > > Key: HIVE-19113 > URL: https://issues.apache.org/jira/browse/HIVE-19113 > Project: Hive > Issue Type: Improvement > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Gopal V >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19113.patch > > > The user's expectation of > "create external table bucketed (key int) clustered by (key) into 4 buckets > stored as orc;" > is that the table will cluster the key into 4 buckets, while the file layout > does not do any actual clustering of rows. > In the absence of a "SORTED BY", this can automatically do a "SORTED BY > (key)" to cluster the keys within the file as expected. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Comment Edited] (HIVE-19113) Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified
[ https://issues.apache.org/jira/browse/HIVE-19113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888446#comment-16888446 ] Gopal V edited comment on HIVE-19113 at 7/19/19 2:14 AM: - Interesting side-effect {code} - Statistics: Num rows: 4200 Data size: 1253037 Basic stats: COMPLETE Column stats: PARTIAL + Statistics: Num rows: 4200 Data size: 1247197 Basic stats: COMPLETE Column stats: PARTIAL {code} The ORC files got smaller after this change. was (Author: gopalv): Interesting side-effect {code} - Statistics: Num rows: 4200 Data size: 1253037 Basic stats: COMPLETE Column stats: PARTIAL + Statistics: Num rows: 4200 Data size: 1247197 Basic stats: COMPLETE Column stats: PARTIAL {code} The ORC files got smaller after this change. > Bucketing: Make CLUSTERED BY do CLUSTER BY if no explicit sorting is specified > -- > > Key: HIVE-19113 > URL: https://issues.apache.org/jira/browse/HIVE-19113 > Project: Hive > Issue Type: Improvement > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Gopal V >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19113.patch > > > The user's expectation of > "create external table bucketed (key int) clustered by (key) into 4 buckets > stored as orc;" > is that the table will cluster the key into 4 buckets, while the file layout > does not do any actual clustering of rows. > In the absence of a "SORTED BY", this can automatically do a "SORTED BY > (key)" to cluster the keys within the file as expected. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
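The change discussed in HIVE-19113 can be illustrated with the DDL from the issue description. Under the proposed behavior, the first statement would be treated as if it were written like the second (a sketch of the intended rewrite, assuming the patch behaves as described):

{code}
-- As written by the user: buckets are declared, but rows inside each
-- bucket file are not sorted.
CREATE EXTERNAL TABLE bucketed (key INT)
CLUSTERED BY (key) INTO 4 BUCKETS
STORED AS ORC;

-- With no explicit SORTED BY, the proposal implies sorting on the
-- clustering key, so the table behaves as if declared:
CREATE EXTERNAL TABLE bucketed (key INT)
CLUSTERED BY (key) SORTED BY (key) INTO 4 BUCKETS
STORED AS ORC;
{code}

Sorting within each bucket also groups similar key values together in the file, which is consistent with the smaller ORC data sizes noted in the comment above.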
[jira] [Commented] (HIVE-22010) Clean up ShowCreateTableOperation
[ https://issues.apache.org/jira/browse/HIVE-22010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888448#comment-16888448 ] Hive QA commented on HIVE-22010: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 5 new + 23 unchanged - 2 fixed = 28 total (was 25) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 15s{color} | {color:red} ql generated 1 new + 2250 unchanged - 0 fixed = 2251 total (was 2250) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Format string should use %n rather than \n in org.apache.hadoop.hive.ql.ddl.table.creation.ShowCreateTableOperation.getCreateTableCommand(Table) At ShowCreateTableOperation.java:rather than \n in org.apache.hadoop.hive.ql.ddl.table.creation.ShowCreateTableOperation.getCreateTableCommand(Table) At ShowCreateTableOperation.java:[line 123] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-18100/dev-support/hive-personality.sh | | git revision | master / 374f361 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-18100/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-18100/yetus/whitespace-eol.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-18100/yetus/new-findbugs-ql.html | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-18100/yetus.txt | | Powered by | Apache 
Yetus http://yetus.apache.org | This message was automatically generated. > Clean up ShowCreateTableOperation > - > > Key: HIVE-22010 > URL: https://issues.apache.org/jira/browse/HIVE-22010 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: refactor-ddl > Attachments: HIVE-22010.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
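The new FindBugs warning flagged in the report above (the VA_FORMAT_STRING_USES_NEWLINE pattern: a format string should use %n rather than \n) comes down to the following distinction. A minimal sketch, independent of the ShowCreateTableOperation code in question:

```java
public class NewlineFormat {
    public static void main(String[] args) {
        // "\n" in a format string is always a literal line feed,
        // regardless of platform.
        String hardcoded = String.format("CREATE TABLE t (\n  key int)");

        // "%n" expands to the platform line separator ("\n" on Unix,
        // "\r\n" on Windows), which is what FindBugs asks for here.
        String portable = String.format("CREATE TABLE t (%n  key int)");

        System.out.println(hardcoded.contains("\n"));                  // true
        System.out.println(portable.contains(System.lineSeparator())); // true
    }
}
```

Whether %n is actually desirable depends on the consumer: output meant for the local console benefits from %n, while output that must be byte-identical across platforms (such as .q.out test files) is often better off with a literal \n.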
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.40.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.39.patch, HIVE-21637.4.patch, > HIVE-21637.40.patch, HIVE-21637.5.patch, HIVE-21637.6.patch, > HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-22010) Clean up ShowCreateTableOperation
[ https://issues.apache.org/jira/browse/HIVE-22010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888468#comment-16888468 ] Hive QA commented on HIVE-22010: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975202/HIVE-22010.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 16681 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_create_table_alter] (batchId=32) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.primaryKeyAndForeignKey (batchId=222) org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty (batchId=232) org.apache.hadoop.hive.ql.parse.TestReplWithJsonMessageFormat.testDumpWithPartitionDirMissing 
(batchId=251) org.apache.hadoop.hive.ql.parse.TestReplWithJsonMessageFormat.testDumpWithTableDirMissing (batchId=251) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDumpWithPartitionDirMissing (batchId=260) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testDumpWithTableDirMissing (batchId=260) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18100/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18100/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18100/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 20 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12975202 - PreCommit-HIVE-Build > Clean up ShowCreateTableOperation > - > > Key: HIVE-22010 > URL: https://issues.apache.org/jira/browse/HIVE-22010 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: refactor-ddl > Attachments: HIVE-22010.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-13004) Remove encryption shims
[ https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888470#comment-16888470 ] Yuming Wang commented on HIVE-13004: Any update? > Remove encryption shims > --- > > Key: HIVE-13004 > URL: https://issues.apache.org/jira/browse/HIVE-13004 > Project: Hive > Issue Type: Task > Components: Encryption >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan >Priority: Major > Attachments: HIVE-13004.1.patch, HIVE-13004.2.patch, HIVE-13004.patch > > > It has served its purpose. Now that we don't support hadoop-1, its no longer > needed. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.41.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.39.patch, HIVE-21637.4.patch, > HIVE-21637.40.patch, HIVE-21637.41.patch, HIVE-21637.5.patch, > HIVE-21637.6.patch, HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888511#comment-16888511 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975214/HIVE-21637.40.patch {color:green}SUCCESS:{color} +1 due to 123 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 16675 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty (batchId=232) org.apache.hadoop.hive.ql.TestTxnCommands.testParallelInsertAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.TestTxnCommands.testParallelTruncateAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testParallelTruncateAnalyzeStats 
(batchId=331) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosExternalTable (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosWithDmlOperations (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosWithReplacePolicyDMLOperattion (batchId=255) org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread.testTxnTable (batchId=317) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18101/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18101/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18101/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 21 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12975214 - PreCommit-HIVE-Build > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.39.patch, HIVE-21637.4.patch, > HIVE-21637.40.patch, HIVE-21637.41.patch, HIVE-21637.5.patch, > HIVE-21637.6.patch, HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Commented] (HIVE-22004) Non-acid to acid conversion doesn't handle random filenames
[ https://issues.apache.org/jira/browse/HIVE-22004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888515#comment-16888515 ] Aditya Shah commented on HIVE-22004: [~owen.omalley] [~ekoifman] [~vgumashta] [~vgarg] Can you please take a look and guide me on this? > Non-acid to acid conversion doesn't handle random filenames > --- > > Key: HIVE-22004 > URL: https://issues.apache.org/jira/browse/HIVE-22004 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Aditya Shah >Priority: Major > > Right now, the only supported filename patterns for a non-acid to acid table's files > (original files) are the ones created by Hive itself (eg. 00, > 00_COPY_1, bucket_0, etc). At the same time, a Hive non-acid table > supports reading from files with random filenames, so we should > support the same for acid tables. > A way to handle this would be to rename such files; rename is not a > costly operation for HDFS, but for non-acid tables located on a > blobstore like S3, random filenames add costly extra steps to the > conversion to acid. > Current scenario: what we do now for original files is assign them a logical > bucket id; for unrecognized patterns we assign -1 and ignore those files. > Proposed alternatives: > 1) For all the random files, assume the logical bucket id is 0 and let the > files belong to the same bucket, similar to the way we handle multiple > files with the same bucket id (_copy_N). > 2) Lexicographically sort all the random files and sequentially > assign them bucket ids, similar to the handling of multiple files for a > non-bucketed table, where we extract the bucket id simply from filenames. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
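The two proposed alternatives can be sketched as small, hypothetical helpers — the method names and the filename regex below are illustrative, not Hive's actual AcidUtils logic:

```java
import java.util.*;
import java.util.regex.*;

// Illustrative sketch of the two alternatives for assigning logical
// bucket ids to "original" files during non-acid to acid conversion.
class BucketIdAssigner {
    // Matches Hive-style original file names such as "000001_0" or
    // "000001_0_copy_2"; group 1 encodes the bucket id. The real
    // pattern Hive uses may differ.
    private static final Pattern HIVE_NAME =
        Pattern.compile("^(\\d+)_\\d+(_copy_\\d+)?$", Pattern.CASE_INSENSITIVE);

    /** Alternative 1: unrecognized (random) names all map to bucket 0. */
    static int bucketIdOrZero(String fileName) {
        Matcher m = HIVE_NAME.matcher(fileName);
        return m.matches() ? Integer.parseInt(m.group(1)) : 0;
    }

    /** Alternative 2: sort names lexicographically, assign sequential ids. */
    static Map<String, Integer> assignSequential(Collection<String> fileNames) {
        List<String> sorted = new ArrayList<>(fileNames);
        Collections.sort(sorted); // lexicographic order
        Map<String, Integer> ids = new LinkedHashMap<>();
        for (int i = 0; i < sorted.size(); i++) {
            ids.put(sorted.get(i), i);
        }
        return ids;
    }
}
```

Alternative 1 piles every random file into one bucket, while alternative 2 spreads them across sequential bucket ids, mirroring how non-bucketed tables already derive ids from sorted filenames.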
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888519#comment-16888519 ] Hive QA commented on HIVE-21637: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 37s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 23s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 30s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 17s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 7s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} beeline in master has 44 extant Findbugs warnings. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 26s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 26s{color} | {color:blue} standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 46s{color} | {color:blue} itests/util in master has 44 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 25s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 10 new + 496 unchanged - 4 fixed = 506 total (was 500) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 177 new + 2232 unchanged - 65 fixed = 2409 total (was 2297) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s{color} | {color:red} ql: The patch generated 82 new + 2262 unchanged - 32 fixed = 2344 total (was 2294) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The patch generated 5 new + 31 unchanged - 0 fixed = 36 total (was 31) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} itests/hcata
[jira] [Updated] (HIVE-22009) CTLV with user specified location is not honoured
[ https://issues.apache.org/jira/browse/HIVE-22009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22009: -- Attachment: HIVE-22009.1.patch > CTLV with user specified location is not honoured > -- > > Key: HIVE-22009 > URL: https://issues.apache.org/jira/browse/HIVE-22009 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0, 3.1.1 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22009-branch-3.1.patch, HIVE-22009.1.patch, > HIVE-22009.patch > > > Steps to repro: > > {code:java} > CREATE TABLE emp_table (id int, name string, salary int); > insert into emp_table values(1,'a',2); > CREATE VIEW emp_view AS SELECT * FROM emp_table WHERE salary>1; > CREATE EXTERNAL TABLE emp_ext_table like emp_view LOCATION > '/tmp/emp_ext_table'; > show create table emp_ext_table;{code} > > {code:java} > ++ > | createtab_stmt | > ++ > | CREATE EXTERNAL TABLE `emp_ext_table`( | > | `id` int, | > | `name` string, | > | `salary` int) | > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' | > | STORED AS INPUTFORMAT | > | 'org.apache.hadoop.mapred.TextInputFormat' | > | OUTPUTFORMAT | > | 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' | > | LOCATION | > | 'hdfs://nn:8020/warehouse/tablespace/external/hive/emp_ext_table' | > | TBLPROPERTIES ( | > | 'bucketing_version'='2', | > | 'transient_lastDdlTime'='1563467962') | > ++{code} > The table location is not '/tmp/emp_ext_table'; instead, it is set to the > default warehouse path. > > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
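The repro above boils down to the user-specified LOCATION being dropped when the LIKE source is a view. A minimal sketch of the resolution rule the repro expects — a hypothetical helper, not Hive's actual create-table code path:

```java
// Hypothetical sketch: a user-specified LOCATION must win over the
// derived warehouse default; only fall back when none was given.
class TableLocationResolver {
    static String resolve(String userLocation, String warehouseRoot, String tableName) {
        if (userLocation != null && !userLocation.isEmpty()) {
            return userLocation; // honour CREATE ... LOCATION '...'
        }
        return warehouseRoot + "/" + tableName; // warehouse default
    }
}
```

With the repro's inputs, `resolve("/tmp/emp_ext_table", ..., "emp_ext_table")` should return `/tmp/emp_ext_table`, whereas the bug shows the warehouse-default branch being taken even when a location was supplied.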
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888546#comment-16888546 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975219/HIVE-21637.41.patch {color:green}SUCCESS:{color} +1 due to 123 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 16676 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_update_status] (batchId=99) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_update_status] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_update_status_disable_bitvector] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_locks] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock1] (batchId=8) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock2] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock3] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock4] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_wide_table] (batchId=96) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_2_exim_basic] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_3_exim_metadata] (batchId=63) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part] (batchId=52) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create] (batchId=180) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_describe] (batchId=179) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=170) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sample10_mm] (batchId=176) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into1] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into2] (batchId=102) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into3] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into4] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg1] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg2] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg3] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg4] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg5] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg_query_tbl_in_locked_db] (batchId=102) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg_try_db_lock_conflict] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg_try_drop_locked_db] (batchId=101) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg_try_lock_db_in_use] (batchId=101) org.apache.hadoop.hive.ql.TestTxnCommands.testParallelInsertAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.TestTxnCommands.testParallelTruncateAnalyzeStats (batchId=349) org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testParallelTruncateAnalyzeStats (batchId=331) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosExternalTable (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosWithDmlOperations (batchId=255) org.apache.hadoop.hive.ql.parse.TestTableLevelReplicationScenarios.testRenameTableScenariosWithReplacePolicyDMLOperattion (batchId=255) 
org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread.testTxnTable (batchId=317) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18102/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18102/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18102/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 37 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12975219 - PreCommit-HIVE-Build > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/b
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888548#comment-16888548 ] Hive QA commented on HIVE-21637: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 33s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 22s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 30s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 12s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 11s{color} | {color:blue} ql in master has 2250 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} beeline in master has 44 extant Findbugs warnings. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} standalone-metastore/metastore-tools/metastore-benchmarks in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 45s{color} | {color:blue} itests/util in master has 44 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} standalone-metastore/metastore-common: The patch generated 10 new + 496 unchanged - 4 fixed = 506 total (was 500) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 177 new + 2232 unchanged - 65 fixed = 2409 total (was 2297) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 3s{color} | {color:red} ql: The patch generated 82 new + 2262 unchanged - 32 fixed = 2344 total (was 2294) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} standalone-metastore/metastore-tools/tools-common: The patch generated 5 new + 31 unchanged - 0 fixed = 36 total (was 31) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} itests/hcata
[jira] [Commented] (HIVE-13004) Remove encryption shims
[ https://issues.apache.org/jira/browse/HIVE-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888549#comment-16888549 ] Hive QA commented on HIVE-13004: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12890192/HIVE-13004.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18103/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18103/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18103/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2019-07-19 05:58:59.932 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-18103/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2019-07-19 05:58:59.940 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 374f361 HIVE-21912: Implement BlacklistingLlapMetricsListener (Peter Vary reviewed by Oliver Draese and Adam Szita) + git clean -f -d Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 374f361 HIVE-21912: Implement BlacklistingLlapMetricsListener (Peter Vary reviewed by Oliver Draese and Adam Szita) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2019-07-19 05:59:01.940 + rm -rf ../yetus_PreCommit-HIVE-Build-18103 + mkdir ../yetus_PreCommit-HIVE-Build-18103 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-18103 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-18103/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderEncryption.java: does not exist in index error: a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java: does not exist in index error: a/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/SessionHiveMetaStoreClient.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: does not exist in index error: 
a/ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorFactory.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/processors/CryptoProcessor.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java: does not exist in index error: a/shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java: does not exist in index error: a/shims/common/src/main/java/org/apache/hadoop/hive/io/HdfsUtils.java: does not exist in index error: a/shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java: does not exist in index error: patch failed: hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderEncryption.java:91 Falling back to three-way merge... Applied patch to 'hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoaderEncryption.java' cleanly. error: patch failed: itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java:38 Falling back to three-way merge... Applied patch to 'itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java' with conflicts. error: metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java: does not exi
[jira] [Commented] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16888550#comment-16888550 ] Hive QA commented on HIVE-21637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12975219/HIVE-21637.41.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18104/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18104/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18104/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12975219/HIVE-21637.41.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12975219 - PreCommit-HIVE-Build > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.39.patch, HIVE-21637.4.patch, > 
HIVE-21637.40.patch, HIVE-21637.41.patch, HIVE-21637.5.patch, > HIVE-21637.6.patch, HIVE-21637.7.patch, HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronous, and in an HMS HA setting we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.42.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.22.patch, HIVE-21637.23.patch, > HIVE-21637.24.patch, HIVE-21637.25.patch, HIVE-21637.26.patch, > HIVE-21637.27.patch, HIVE-21637.28.patch, HIVE-21637.29.patch, > HIVE-21637.3.patch, HIVE-21637.30.patch, HIVE-21637.31.patch, > HIVE-21637.32.patch, HIVE-21637.33.patch, HIVE-21637.34.patch, > HIVE-21637.35.patch, HIVE-21637.36.patch, HIVE-21637.37.patch, > HIVE-21637.38.patch, HIVE-21637.39.patch, HIVE-21637.4.patch, > HIVE-21637.40.patch, HIVE-21637.41.patch, HIVE-21637.42.patch, > HIVE-21637.5.patch, HIVE-21637.6.patch, HIVE-21637.7.patch, > HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronized and in HMS HA setting, we can only get eventual consistency. In > this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.14#76016)