[jira] [Updated] (HIVE-19269) Vectorization: Turn On by Default
[ https://issues.apache.org/jira/browse/HIVE-19269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19269:
    Attachment: HIVE-19269.06-branch-3.patch

> Vectorization: Turn On by Default
>
>                 Key: HIVE-19269
>                 URL: https://issues.apache.org/jira/browse/HIVE-19269
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>            Reporter: Matt McCline
>            Assignee: Matt McCline
>            Priority: Critical
>             Fix For: 3.0.0, 3.1.0
>
>         Attachments: HIVE-19269.01.patch, HIVE-19269.02.patch, HIVE-19269.04.patch, HIVE-19269.05.patch, HIVE-19269.06-branch-3.patch, HIVE-19269.06.patch
>
> Most Hive deployments are now expected to use vectorization, so change the default of hive.vectorized.execution.enabled to true.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
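The property named in the issue can still be toggled per session even after the default flips; a minimal illustrative fragment (the property name comes from the issue itself, the surrounding usage is an assumption about a typical Hive CLI / Beeline session):

```sql
-- hive.vectorized.execution.enabled defaults to true after HIVE-19269;
-- a session that wants the old row-mode behavior can opt out explicitly:
SET hive.vectorized.execution.enabled=false;

-- issuing SET with no value prints the currently effective setting:
SET hive.vectorized.execution.enabled;
```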
[jira] [Assigned] (HIVE-19341) contrib qtest shows exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haifeng Chen reassigned HIVE-19341:
    Assignee: Haifeng Chen

> contrib qtest shows exception in the clean up scripts
>
>                 Key: HIVE-19341
>                 URL: https://issues.apache.org/jira/browse/HIVE-19341
>             Project: Hive
>          Issue Type: Bug
>          Components: Contrib
>    Affects Versions: 3.1.0
>            Reporter: Haifeng Chen
>            Assignee: Haifeng Chen
>            Priority: Major
>
> At the end of the result output file, the following exception log is appended. The text does not make the q test fail, but it would not appear at all if the cleanup were correct:
>
> FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.metadata.HiveException(Error while invoking PreHook. hooks: java.lang.RuntimeException: Cannot overwrite read-only table: src_thrift
>   at org.apache.hadoop.hive.ql.hooks.EnforceReadOnlyTables.run(EnforceReadOnlyTables.java:63)
>   at org.apache.hadoop.hive.ql.hooks.EnforceReadOnlyTables.run(EnforceReadOnlyTables.java:43)
>   at org.apache.hadoop.hive.ql.HookRunner.invokeGeneralHook(HookRunner.java:296)
>   at org.apache.hadoop.hive.ql.HookRunner.runPreHooks(HookRunner.java:273)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2086)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1825)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1568)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1562)
>   at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
>   at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218)
>   at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335)
>   at org.apache.hadoop.hive.ql.QTestUtil.cleanupFromFile(QTestUtil.java:1131)
>   at org.apache.hadoop.hive.ql.QTestUtil.cleanUp(QTestUtil.java:1104)
>   at org.apache.hadoop.hive.ql.QTestUtil.cleanUp(QTestUtil.java:1088)
>   at org.apache.hadoop.hive.ql.QTestUtil.shutdown(QTestUtil.java:736)
>   at org.apache.hadoop.hive.cli.control.CoreCliDriver$6.invokeInternal(CoreCliDriver.java:141)
>   at org.apache.hadoop.hive.cli.control.CoreCliDriver$6.invokeInternal(CoreCliDriver.java:138)
>   at org.apache.hadoop.hive.util.ElapsedTimeLoggingWrapper.invoke(ElapsedTimeLoggingWrapper.java:33)
>   at org.apache.hadoop.hive.cli.control.CoreCliDriver.shutdown(CoreCliDriver.java:144)
>   at org.apache.hadoop.hive.cli.control.CliAdapter$1$1.evaluate(CliAdapter.java:75)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
>   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
>   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
> )
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while invoking PreHook. hooks: java.lang.RuntimeException: Cannot overwrite read-only table: src_thrift
>   [the same stack trace repeats here; it is truncated in the original message]
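The failure quoted above can be read directly from the trace: EnforceReadOnlyTables is a pre-execution hook, so any statement that would overwrite a table the qtest harness treats as read-only (such as src_thrift) is rejected before it runs. An illustrative fragment only; the exact statement in the cleanup script is an assumption, but it is the kind of write the hook blocks:

```sql
-- src_thrift is one of the pre-loaded source tables the qtest harness
-- protects via the EnforceReadOnlyTables pre-hook. A cleanup statement
-- such as the following would be rejected with
-- "Cannot overwrite read-only table: src_thrift":
DROP TABLE IF EXISTS src_thrift;
```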
[jira] [Updated] (HIVE-19341) contrib qtest shows exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haifeng Chen updated HIVE-19341:
    Attachment: HIVE-19341.01.patch
[jira] [Updated] (HIVE-19341) contrib qtest shows exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haifeng Chen updated HIVE-19341:
    Component/s: Testing Infrastructure
[jira] [Updated] (HIVE-19341) contrib qtest shows exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haifeng Chen updated HIVE-19341:
    Component/s: (was: Contrib)
                 Tests
[jira] [Updated] (HIVE-19341) qtest shows exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haifeng Chen updated HIVE-19341:
    Summary: qtest shows exception in the clean up scripts  (was: contrib qtest shows exception in the clean up scripts)
[jira] [Updated] (HIVE-19341) qtest shows Enforce read-only exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haifeng Chen updated HIVE-19341:
    Summary: qtest shows Enforce read-only exception in the clean up scripts  (was: qtest shows exception in the clean up scripts)
[jira] [Updated] (HIVE-19341) qtest shows cannot overwrite read-only table exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haifeng Chen updated HIVE-19341:
    Summary: qtest shows cannot overwrite read-only table exception in the clean up scripts  (was: qtest shows Enforce read-only exception in the clean up scripts)
[jira] [Updated] (HIVE-19341) qtest shows cannot overwrite read-only table exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haifeng Chen updated HIVE-19341: Status: Patch Available (was: Open) Based on the recent changes to the loading of test tables, the drop table statements for these tables caused exceptions in the output results. Drop table statements for these tables are no longer needed and are therefore removed by this patch. > qtest shows cannot overwrite read-only table exception in the clean up scripts > -- > > Key: HIVE-19341 > URL: https://issues.apache.org/jira/browse/HIVE-19341 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure, Tests >Affects Versions: 3.1.0 >Reporter: Haifeng Chen >Assignee: Haifeng Chen >Priority: Major > Attachments: HIVE-19341.01.patch > > > At the end of the result output file, the following exception log is appended. > This text will not cause the q test to fail if it is correct. > FAILED: Hive Internal Error: > org.apache.hadoop.hive.ql.metadata.HiveException(Error while invoking > PreHook. hooks: java.lang.RuntimeException: Cannot overwrite read-only table: > src_thrift)
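The "Cannot overwrite read-only table" error above comes from a pre-execution hook that rejects writes to the protected q-test source tables. A minimal sketch of that kind of check is below; this is an illustration only, not the actual EnforceReadOnlyTables implementation, and the table set and method names are assumptions based on the log.

```java
import java.util.Set;

// Minimal sketch of a pre-execution check that rejects writes to
// read-only source tables, mirroring the RuntimeException in the log above.
public class ReadOnlyTableCheck {
    // Hypothetical set of protected q-test source tables.
    private static final Set<String> READ_ONLY_TABLES =
        Set.of("src", "src1", "src_thrift", "src_json");

    // Called before a query runs; 'outputTable' is the table the query writes to.
    public static void checkWriteTarget(String outputTable) {
        if (READ_ONLY_TABLES.contains(outputTable)) {
            throw new RuntimeException(
                "Cannot overwrite read-only table: " + outputTable);
        }
    }

    public static void main(String[] args) {
        checkWriteTarget("dest1"); // allowed: not a protected source table
        try {
            checkWriteTarget("src_thrift"); // rejected, as in the stack trace
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this model, a cleanup script that issues `drop table src_thrift` trips the hook every time, which is why removing the now-unneeded drop statements silences the appended exception text.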
[jira] [Commented] (HIVE-19054) Function replication shall use "hive.repl.replica.functions.root.dir" as root
[ https://issues.apache.org/jira/browse/HIVE-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456046#comment-16456046 ] Hive QA commented on HIVE-19054: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920659/HIVE-19054.4.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 50 failed/errored test(s), 14284 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=80) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] 
(batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez] (batchId=106) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) 
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.hcatalog.pig.TestParquetHCatLoader.testReadPartitionedBasic (batchId=196) org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteTimestamp (batchId=196) org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteTinyint (batchId=196) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=242) org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testCancelRenewTokenFlow (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testConnectio
[jira] [Updated] (HIVE-19161) Add authorizations to information schema
[ https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19161: -- Attachment: HIVE-19161.10.patch > Add authorizations to information schema > > > Key: HIVE-19161 > URL: https://issues.apache.org/jira/browse/HIVE-19161 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19161.1.patch, HIVE-19161.10.patch, > HIVE-19161.2.patch, HIVE-19161.3.patch, HIVE-19161.4.patch, > HIVE-19161.5.patch, HIVE-19161.6.patch, HIVE-19161.7.patch, > HIVE-19161.8.patch, HIVE-19161.9.patch > > > We need to control access to the information schema so that users can only query > the information they are authorized to see. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19161) Add authorizations to information schema
[ https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456047#comment-16456047 ] Daniel Dai commented on HIVE-19161: --- HIVE-19161.10.patch adds the capability to ship the required jars in the jdbc storage handler. > Add authorizations to information schema > > > Key: HIVE-19161 > URL: https://issues.apache.org/jira/browse/HIVE-19161 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19161.1.patch, HIVE-19161.10.patch, > HIVE-19161.2.patch, HIVE-19161.3.patch, HIVE-19161.4.patch, > HIVE-19161.5.patch, HIVE-19161.6.patch, HIVE-19161.7.patch, > HIVE-19161.8.patch, HIVE-19161.9.patch > > > We need to control access to the information schema so that users can only query > the information they are authorized to see. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19341) qtest shows cannot overwrite read-only table exception in the clean up scripts
[ https://issues.apache.org/jira/browse/HIVE-19341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456054#comment-16456054 ] Haifeng Chen commented on HIVE-19341: - Hi [~abstractdog] Would you please review this patch related to the recent qtest src table changes? > qtest shows cannot overwrite read-only table exception in the clean up scripts > -- > > Key: HIVE-19341 > URL: https://issues.apache.org/jira/browse/HIVE-19341 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure, Tests >Affects Versions: 3.1.0 >Reporter: Haifeng Chen >Assignee: Haifeng Chen >Priority: Major > Attachments: HIVE-19341.01.patch > > > At the end of the result output file, the following exception log is appended. > This text will not cause the q test to fail if it is correct. > FAILED: Hive Internal Error: > org.apache.hadoop.hive.ql.metadata.HiveException(Error while invoking > PreHook. hooks: java.lang.RuntimeException: Cannot overwrite read-only table: > src_thrift)
[jira] [Updated] (HIVE-19311) Partition and bucketing support for “load data” statement
[ https://issues.apache.org/jira/browse/HIVE-19311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-19311: -- Attachment: HIVE-19311.3.patch > Partition and bucketing support for “load data” statement > - > > Key: HIVE-19311 > URL: https://issues.apache.org/jira/browse/HIVE-19311 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19311.1.patch, HIVE-19311.2.patch, > HIVE-19311.3.patch > > > Currently, the "load data" statement is very limited. It errors out if any of the > required information is missing, such as partitioning info if the table is partitioned or > appropriate names when the table is bucketed. > It should be able to launch an insert job to load the data instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19161) Add authorizations to information schema
[ https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456098#comment-16456098 ] Hive QA commented on HIVE-19161: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 32s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 21s{color} | {color:red} hcatalog-unit in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 36s{color} | {color:red} hive-unit in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 4s{color} | {color:red} ql in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 21s{color} | {color:red} service in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 37s{color} | {color:red} hive-unit in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 24s{color} | {color:red} service in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 37s{color} | {color:red} hive-unit in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s{color} | {color:red} service in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} itests/hive-unit: The patch generated 5 new + 2 unchanged - 0 fixed = 7 total (was 2) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} jdbc-handler: The patch generated 4 new + 21 unchanged - 0 fixed = 25 total (was 21) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 1 new + 137 unchanged - 0 fixed = 138 total (was 137) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s{color} | {color:red} standalone-metastore: The patch generated 1 new + 1515 unchanged - 1 fixed = 1516 total (was 1516) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 27 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s{color} | {color:red} service generated 1 new + 40 unchanged - 0 fixed = 41 total (was 40) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 4s{color} | {color:red} standalone-metastore generated 1 new + 55 unchanged - 0 fixed = 56 total (was 55) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 35m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10521/dev-support/hive-personality.sh | | git revision | master / 0dec5
[jira] [Commented] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456108#comment-16456108 ] Adrian Woodhead commented on HIVE-18767: [~pvary] thanks for kicking that off, it ran but the build is failing when applying the patch. I think it's trying to apply the patch to master which is Hive 3.x while the patch is intended for Hive 2.3.x. I saw the code had moved around in 3.x so it makes sense that it can't apply it to master. [~pvary] how do we proceed if we want this applied to Hive 2.x? > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Attachments: HIVE-18767.1.patch, HIVE-18767.2.patch, > HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
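The "NumberFormatException: null" reported for HIVE-18767 is the classic failure mode of passing a missing map value straight to Long.parseLong, which throws NumberFormatException with the message "null". The sketch below illustrates the failure and a null-guard; the parameter map and the "numRows" key are illustrative assumptions, not the actual MetaStoreUtils.isFastStatsSame code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of why comparing "fast stats" parameters can throw
// NumberFormatException: null when a stat key is absent.
public class FastStatsSketch {
    // Unsafe: Long.parseLong(null) throws NumberFormatException("null").
    static long unsafeStat(Map<String, String> params, String key) {
        return Long.parseLong(params.get(key));
    }

    // Guarded variant: a missing stat is reported as null instead of throwing.
    static Long safeStat(Map<String, String> params, String key) {
        String v = params.get(key);
        return (v == null) ? null : Long.parseLong(v);
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>(); // no "numRows" entry
        System.out.println(safeStat(params, "numRows")); // prints: null
        try {
            unsafeStat(params, "numRows");
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```

The guarded variant matches the general shape of the fix discussed in the issue: treat absent stats as "not comparable" rather than parsing them blindly.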
[jira] [Commented] (HIVE-19161) Add authorizations to information schema
[ https://issues.apache.org/jira/browse/HIVE-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456120#comment-16456120 ] Hive QA commented on HIVE-19161: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920660/HIVE-19161.9.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 48 failed/errored test(s), 14285 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=80) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] 
(batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[jdbc_handler] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez] (batchId=106) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning (batchId=309) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) 
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=241) org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testCancelRenewTokenFlow (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testConnection (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValid (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValidNeg (batchId=254) org.apache.hiv
[jira] [Commented] (HIVE-19320) MapRedLocalTask is printing child log to stderr and stdout
[ https://issues.apache.org/jira/browse/HIVE-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456123#comment-16456123 ] Peter Vary commented on HIVE-19320: --- [~aihuaxu]: [~Yibing] and [~zsombor.klara] changed the following lines in HIVE-17078: https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java#L332 I think/hope that after this patch the local task outputs are redirected to the HS2 logs too. > MapRedLocalTask is printing child log to stderr and stdout > -- > > Key: HIVE-19320 > URL: https://issues.apache.org/jira/browse/HIVE-19320 > Project: Hive > Issue Type: Sub-task > Components: Logging >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Priority: Major > > On this line, the local child MR task prints its logs to stderr and stdout. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java#L341 > stderr/stdout should capture the service running log rather than the query > execution output. It would be reasonable for the latter to go to the HS2 log and propagate > to the beeline console. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
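The redirection being discussed can be illustrated with a minimal, generic sketch (plain ProcessBuilder, not the actual MapredLocalTask code): rather than letting the child process inherit the parent's stdout/stderr, the parent drains the child's streams and routes each line to its own log sink, so the output lands in the service log instead of the console.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Sketch: capture a child process's output and route it to a log sink
// instead of letting it print to the parent's stdout/stderr.
public class ChildLogRedirect {
    static List<String> runAndCapture(String... cmd)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true); // merge the child's stderr into stdout
        Process p = pb.start();
        List<String> captured = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                // In a real service this line would go to the HS2 log appender.
                captured.add("[child] " + line);
            }
        }
        p.waitFor();
        return captured;
    }

    public static void main(String[] args) throws Exception {
        // Assumes a Unix-like environment where 'echo' is available.
        for (String line : runAndCapture("echo", "hello from child")) {
            System.out.println(line);
        }
    }
}
```

The key point is that once the parent owns the child's streams, it decides where each line goes (service log, operation log, or console) instead of the child writing to stderr/stdout directly.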
[jira] [Commented] (HIVE-19328) Some error messages like "table not found" are printing to STDERR
[ https://issues.apache.org/jira/browse/HIVE-19328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456131#comment-16456131 ] Peter Vary commented on HIVE-19328: --- Some time ago I did extensive work on LogHelper (console), and wrote some comments there. It might help to find out more details about how it currently works: [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java#L1056] {quote} * Logs info into the log file, and if not silent then into the HiveServer2 or HiveCli info * stream too. Handles an extra detail which will not be printed if null. * BeeLine uses the operation log file to show the logs to the user, so depending on the * BeeLine settings it could be shown to the user.{quote} > Some error messages like "table not found" are printing to STDERR > - > > Key: HIVE-19328 > URL: https://issues.apache.org/jira/browse/HIVE-19328 > Project: Hive > Issue Type: Sub-task > Components: Logging >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Priority: Major > > In Driver class, we are printing the exceptions to the log file and to the > console through LogHelper. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L730 > I can see the following exceptions in the stderr. > FAILED: SemanticException [Error 10001]: Table not found default.sample_07 > If it's from HiveCli, that makes sense to print to console, while if it's > beeline talking to HS2, then such log should go to HS2 log and beeline > console. So we should differentiate these two scenarios. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
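The quoted LogHelper javadoc describes a two-sink scheme: every message goes to the log file, and unless the session is silent it also goes to the HiveServer2 or HiveCli info stream. A tiny illustrative restatement of that routing, with hypothetical names rather than Hive's actual SessionState API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class LogHelperSketch {
    // Hypothetical stand-in for SessionState.LogHelper: one sink for the log
    // file, one for the info stream that BeeLine ends up showing.
    private final Consumer<String> logFile;
    private final Consumer<String> infoStream;
    private final boolean silent;

    LogHelperSketch(Consumer<String> logFile, Consumer<String> infoStream, boolean silent) {
        this.logFile = logFile;
        this.infoStream = infoStream;
        this.silent = silent;
    }

    void logInfo(String msg, String detail) {
        // The "extra detail" is skipped when null, as the javadoc says.
        String line = detail == null ? msg : msg + " " + detail;
        logFile.accept(line);           // always logged
        if (!silent) {
            infoStream.accept(line);    // console/operation log only when not silent
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        List<String> console = new ArrayList<>();
        new LogHelperSketch(log::add, console::add, false).logInfo("FAILED: SemanticException", null);
        new LogHelperSketch(log::add, console::add, true).logInfo("quiet message", null);
        System.out.println(log.size() + " " + console.size()); // prints 2 1
    }
}
```

The open question in the comment is exactly which sink the CLI vs. the HS2 case should get for errors; this sketch only shows the existing silent/non-silent split.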
[jira] [Commented] (HIVE-19198) Few flaky hcatalog tests
[ https://issues.apache.org/jira/browse/HIVE-19198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456145#comment-16456145 ] Hive QA commented on HIVE-19198: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} hcatalog/core: The patch generated 3 new + 6 unchanged - 1 fixed = 9 total (was 7) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10522/dev-support/hive-personality.sh | | git revision | master / 0dec595 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10522/yetus/diff-checkstyle-hcatalog_core.txt | | modules | C: hcatalog/core U: hcatalog/core | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10522/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Few flaky hcatalog tests > > > Key: HIVE-19198 > URL: https://issues.apache.org/jira/browse/HIVE-19198 > Project: Hive > Issue Type: Sub-task >Reporter: Ashutosh Chauhan >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19198.1.patch, HIVE-19198.2.patch > > > TestPermsGrp : Consider removing this since hcat cli is not widely used. > TestHCatPartitionPublish.testPartitionPublish > TestHCatMultiOutputFormat.testOutputFormat -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19206) Automatic memory management for open streaming writers
[ https://issues.apache.org/jira/browse/HIVE-19206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19206: - Attachment: HIVE-19206.1.patch > Automatic memory management for open streaming writers > -- > > Key: HIVE-19206 > URL: https://issues.apache.org/jira/browse/HIVE-19206 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19206.1.patch > > > Problem: > When there are 100s of record updaters open, the amount of memory required > by ORC writers keeps growing because of ORC's internal buffers. This can lead > to high GC pressure or OOM during streaming ingest. > Solution: > The high-level idea is for the streaming connection to remember all the open > record updaters and flush them periodically (at some interval). > The number of records written to each record updater can be used as a metric to determine > the candidate record updaters for flushing. > If the stripe size of the ORC file is 64MB, the default memory management check > happens only after every 5000 rows, which may be too late when there > are too many concurrent writers in a process. An example case would be 100 > writers open, each with an almost full stripe of 64MB buffered data; this > would take 100*64MB ~= 6.4GB of memory. When all of the record writers > flush, the memory usage drops to 100*~2MB, which is just ~200MB of memory > usage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
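The solution described above can be sketched in a few lines: the connection keeps a list of open writers and, on a periodic check, flushes any writer whose buffered row count has crossed a threshold. This is a minimal illustration of the idea, not the HIVE-19206 patch; class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

public class StreamingFlushSketch {
    // Hypothetical writer handle: tracks rows buffered since the last flush.
    static class RecordUpdater {
        long rowsSinceFlush = 0;
        void write() { rowsSinceFlush++; }
        void flush() { rowsSinceFlush = 0; } // real impl would spill ORC buffers to disk
    }

    final List<RecordUpdater> open = new ArrayList<>();
    final long rowThreshold;

    StreamingFlushSketch(long rowThreshold) { this.rowThreshold = rowThreshold; }

    // Periodic check: flush any updater whose buffered row count crossed the
    // threshold, so total buffered memory stays bounded even with 100s of
    // concurrent writers in one process.
    void checkMemory() {
        for (RecordUpdater u : open) {
            if (u.rowsSinceFlush >= rowThreshold) {
                u.flush();
            }
        }
    }

    public static void main(String[] args) {
        StreamingFlushSketch conn = new StreamingFlushSketch(5000);
        RecordUpdater u = new RecordUpdater();
        conn.open.add(u);
        for (int i = 0; i < 6000; i++) { u.write(); }
        conn.checkMemory(); // 6000 >= 5000, so the updater is flushed
        System.out.println(u.rowsSinceFlush); // prints 0
    }
}
```

The actual patch additionally triggers the check from MemoryMXBean usage-threshold notifications, with the ingested-data-size heuristic as a fallback.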
[jira] [Commented] (HIVE-19206) Automatic memory management for open streaming writers
[ https://issues.apache.org/jira/browse/HIVE-19206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456174#comment-16456174 ] Prasanth Jayachandran commented on HIVE-19206: -- This patch depends on HIVE-19211. - Added memory monitoring via MemoryMXBean notifications, plus a fallback based on the data size being ingested - Added various counters [~gopalv] can you please take a look? > Automatic memory management for open streaming writers > -- > > Key: HIVE-19206 > URL: https://issues.apache.org/jira/browse/HIVE-19206 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19206.1.patch > > > Problem: > When there are 100s of record updaters open, the amount of memory required > by ORC writers keeps growing because of ORC's internal buffers. This can lead > to high GC pressure or OOM during streaming ingest. > Solution: > The high-level idea is for the streaming connection to remember all the open > record updaters and flush them periodically (at some interval). > The number of records written to each record updater can be used as a metric to determine > the candidate record updaters for flushing. > If the stripe size of the ORC file is 64MB, the default memory management check > happens only after every 5000 rows, which may be too late when there > are too many concurrent writers in a process. An example case would be 100 > writers open, each with an almost full stripe of 64MB buffered data; this > would take 100*64MB ~= 6.4GB of memory. When all of the record writers > flush, the memory usage drops to 100*~2MB, which is just ~200MB of memory > usage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19206) Automatic memory management for open streaming writers
[ https://issues.apache.org/jira/browse/HIVE-19206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456176#comment-16456176 ] Prasanth Jayachandran commented on HIVE-19206: -- [~gopalv] Please discard the RB request. The RB request includes HIVE-19211 plus this patch, so it will look big. > Automatic memory management for open streaming writers > -- > > Key: HIVE-19206 > URL: https://issues.apache.org/jira/browse/HIVE-19206 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19206.1.patch > > > Problem: > When there are 100s of record updaters open, the amount of memory required > by ORC writers keeps growing because of ORC's internal buffers. This can lead > to high GC pressure or OOM during streaming ingest. > Solution: > The high-level idea is for the streaming connection to remember all the open > record updaters and flush them periodically (at some interval). > The number of records written to each record updater can be used as a metric to determine > the candidate record updaters for flushing. > If the stripe size of the ORC file is 64MB, the default memory management check > happens only after every 5000 rows, which may be too late when there > are too many concurrent writers in a process. An example case would be 100 > writers open, each with an almost full stripe of 64MB buffered data; this > would take 100*64MB ~= 6.4GB of memory. When all of the record writers > flush, the memory usage drops to 100*~2MB, which is just ~200MB of memory > usage. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19198) Few flaky hcatalog tests
[ https://issues.apache.org/jira/browse/HIVE-19198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456199#comment-16456199 ] Hive QA commented on HIVE-19198: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920374/HIVE-19198.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 44 failed/errored test(s), 14284 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=80) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] 
(batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez] (batchId=106) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) 
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testCancelRenewTokenFlow (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testConnection (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValid (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testIsValidNeg (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testNegativeProxyAuth (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testNegativeTokenAuth (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testProxyAuth (batchId=254) org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.testRenewDelegationToken (batchId=254) org.apache.hive.minikdc.T
[jira] [Commented] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456211#comment-16456211 ] Peter Vary commented on HIVE-18767: --- https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-CreatingaPatch > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Attachments: HIVE-18767.1.patch, HIVE-18767.2.patch, > HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was 
sent by Atlassian JIRA (v7.6.3#76005)
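The stack trace above shows `Long.parseLong` being handed a null inside `MetaStoreUtils.isFastStatsSame` when a partition's fast-stat parameter is absent. A minimal sketch of the null-safe comparison the fix needs; the method and parameter names here are illustrative, not Hive's actual metastore API:

```java
import java.util.HashMap;
import java.util.Map;

public class FastStatsCheck {
    // Hypothetical stand-in for the isFastStatsSame check: compare one fast
    // stat (e.g. numRows) between the old and new partition parameters.
    // Long.parseLong(null) throws "NumberFormatException: null", so both
    // values are checked before parsing; a missing stat counts as "not same".
    static boolean isFastStatSame(Map<String, String> oldParams,
                                  Map<String, String> newParams,
                                  String statKey) {
        String oldVal = oldParams == null ? null : oldParams.get(statKey);
        String newVal = newParams == null ? null : newParams.get(statKey);
        if (oldVal == null || newVal == null) {
            return false; // avoids NumberFormatException: null
        }
        return Long.parseLong(oldVal) == Long.parseLong(newVal);
    }

    public static void main(String[] args) {
        Map<String, String> withStats = new HashMap<>();
        withStats.put("numRows", "42");
        Map<String, String> without = new HashMap<>();
        System.out.println(isFastStatSame(withStats, withStats, "numRows")); // prints true
        System.out.println(isFastStatSame(withStats, without, "numRows"));   // prints false
    }
}
```

Treating a missing stat as "not the same" makes alterPartitions fall back to recomputing stats rather than crashing, which matches the intent of the attached patches.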
[jira] [Commented] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456213#comment-16456213 ] Mass Dosage commented on HIVE-18767: Ah, OK, I missed the bit about including the branch name in the patch file name, let me try again. > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Attachments: HIVE-18767.1.patch, HIVE-18767.2.patch, > HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was 
sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrian Woodhead updated HIVE-18767: --- Status: Open (was: Patch Available) > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Attachments: HIVE-18767.1.patch, HIVE-18767.2.patch, > HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18988) Support bootstrap replication of ACID tables
[ https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456228#comment-16456228 ] Hive QA commented on HIVE-18988: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 5s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} server-extensions in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 56s{color} | {color:red} ql in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch storage-api passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} The patch common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch server-extensions passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 50s{color} | {color:red} ql: The patch generated 10 new + 760 unchanged - 7 fixed = 770 total (was 767) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} standalone-metastore: The patch generated 0 new + 1497 unchanged - 13 fixed = 1497 total (was 1510) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 27 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10524/dev-support/hive-personality.sh | | git revision | master / 0dec595 | | Default Java | 1.8.0_111 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-10524/yetus/patch-mvninstall-hcatalog_server-extensions.txt | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-10524/yetus/patch-mvninstall-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10524/yetus/diff-checkstyle-itests_hive-unit.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10524/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-10524/yetus/whitespace-eol.txt | | modules | C: storage-api common hcatalog/server-extensions itests/
[jira] [Assigned] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrian Woodhead reassigned HIVE-18767: -- Assignee: Adrian Woodhead (was: Mass Dosage) > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Adrian Woodhead >Priority: Major > Attachments: HIVE-18767.1.patch, HIVE-18767.2.patch, > HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mass Dosage reassigned HIVE-18767: -- Assignee: Mass Dosage (was: Adrian Woodhead) > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Attachments: HIVE-18767.1.patch, HIVE-18767.2.patch, > HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mass Dosage updated HIVE-18767: --- Attachment: HIVE-18767-branch-2.3.patch > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Attachments: HIVE-18767-branch-2.3.patch, HIVE-18767.1.patch, > HIVE-18767.2.patch, HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mass Dosage updated HIVE-18767: --- Fix Version/s: 2.3.3 Status: Patch Available (was: Open) > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Fix For: 2.3.3 > > Attachments: HIVE-18767-branch-2.3.patch, HIVE-18767.1.patch, > HIVE-18767.2.patch, HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18767) Some alterPartitions invocations throw 'NumberFormatException: null'
[ https://issues.apache.org/jira/browse/HIVE-18767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456235#comment-16456235 ] Mass Dosage commented on HIVE-18767: OK, I have renamed patch to "HIVE-18767-branch-2.3.patch". The documentation on branches in the Wiki is out of date but "branch-2.3" looks the most likely candidate for where the next 2.3 release will be coming from. Please correct me if I'm wrong. I did a fresh checkout of that branch and ensured that the patch applies cleanly: {code} patch -p0 --dry-run < ../HIVE-18767-branch-2.3.patch checking file metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java checking file metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreUtils.java {code} So hopefully we're good to go. > Some alterPartitions invocations throw 'NumberFormatException: null' > > > Key: HIVE-18767 > URL: https://issues.apache.org/jira/browse/HIVE-18767 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.2 >Reporter: Yuming Wang >Assignee: Mass Dosage >Priority: Major > Fix For: 2.3.3 > > Attachments: HIVE-18767-branch-2.3.patch, HIVE-18767.1.patch, > HIVE-18767.2.patch, HIVE-18767.3.patch > > > Error messages: > {noformat} > [info] Cause: java.lang.NumberFormatException: null > [info] at java.lang.Long.parseLong(Long.java:552) > [info] at java.lang.Long.parseLong(Long.java:631) > [info] at > org.apache.hadoop.hive.metastore.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:315) > [info] at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:605) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:3837) > [info] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > [info] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > [info] at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > [info] at java.lang.reflect.Method.invoke(Method.java:498) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [info] at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [info] at > com.sun.proxy.$Proxy23.alter_partitions_with_environment_context(Unknown > Source) > [info] at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partitions(HiveMetaStoreClient.java:1527) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19285) Add logs to the subclasses of MetaDataOperation
[ https://issues.apache.org/jira/browse/HIVE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-19285: -- Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks for the patch [~kuczoram]! > Add logs to the subclasses of MetaDataOperation > --- > > Key: HIVE-19285 > URL: https://issues.apache.org/jira/browse/HIVE-19285 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Minor > Fix For: 3.1.0 > > Attachments: HIVE-19285.1.patch, HIVE-19285.2.patch, > HIVE-19285.3.patch > > > Subclasses of MetaDataOperation are not writing anything to the logs. It > would be useful to have some INFO and DEBUG level logging in these classes. > The following classes are affected > * GetCatalogsOperation > * GetColumnsOperation > * GetFunctionsOperation > * GetSchemasOperation > * GetTablesOperation > * GetTypeInfoOperation > * GetTableTypesOperation > * GetCrossReferenceOperation > * GetPrimaryKeysOperation -- This message was sent by Atlassian JIRA (v7.6.3#76005)
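As context for the logging added by this ticket, here is a hedged sketch of the pattern: an INFO line marking operation start/completion plus a DEBUG-level line carrying the request parameters. The class and messages are illustrative (Hive itself uses slf4j; java.util.logging keeps the snippet dependency-free):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch, not the actual GetTablesOperation: the ticket's idea is
// that each MetaDataOperation subclass logs at INFO when it runs and at a
// debug level with its parameters.
public class GetTablesOperationSketch {
    private static final Logger LOG =
            Logger.getLogger(GetTablesOperationSketch.class.getName());

    private final String catalogName;
    private final String schemaPattern;
    private final String tablePattern;

    public GetTablesOperationSketch(String catalogName, String schemaPattern,
                                    String tablePattern) {
        this.catalogName = catalogName;
        this.schemaPattern = schemaPattern;
        this.tablePattern = tablePattern;
    }

    // Building the message separately keeps it cheap to unit-test.
    String describeRequest() {
        return String.format(
            "Fetching table metadata: catalog=%s, schemaPattern=%s, tablePattern=%s",
            catalogName, schemaPattern, tablePattern);
    }

    public void run() {
        LOG.info("Starting GetTablesOperation");
        LOG.log(Level.FINE, describeRequest()); // FINE is JUL's DEBUG equivalent
        // ... the actual metadata lookup would go here ...
        LOG.info("Completed GetTablesOperation");
    }

    public static void main(String[] args) {
        new GetTablesOperationSketch("hive", "default", "%").run();
        System.out.println(
            new GetTablesOperationSketch("hive", "default", "%").describeRequest());
    }
}
```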
[jira] [Commented] (HIVE-18988) Support bootstrap replication of ACID tables
[ https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456257#comment-16456257 ] Hive QA commented on HIVE-18988: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920853/HIVE-18988.05.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 51 failed/errored test(s), 14286 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=80) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] (batchId=165) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez] (batchId=106) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) 
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=254) org.apache.hive.minikdc.TestJdbcWi
[jira] [Commented] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
[ https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456259#comment-16456259 ] Hive QA commented on HIVE-19325: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920878/HIVE-19325.branch-0.13.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10527/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10527/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10527/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-04-27 11:39:16.635 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10527/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-04-27 11:39:16.638 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 0dec595..4776497 master -> origin/master ea18769..c08f3b5 branch-3 -> origin/branch-3 + git reset --hard HEAD HEAD is now at 0dec595 HIVE-19124 : implement a basic major compactor for MM tables (Sergey Shelukhin, reviewed by Eugene Koifman and Gopal Vijayaraghavan) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 4776497 HIVE-19285: Add logs to the subclasses of MetaDataOperation (Marta Kuczora, via Peter Vary) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-04-27 11:39:30.593 + rm -rf ../yetus_PreCommit-HIVE-Build-10527 + mkdir ../yetus_PreCommit-HIVE-Build-10527 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10527 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10527/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/beeline/src/java/org/apache/hive/beeline/BeeLine.java: does not exist in index error: patch failed: beeline/src/java/org/apache/hive/beeline/BeeLine.java:555 Falling back to three-way merge... Applied patch to 'beeline/src/java/org/apache/hive/beeline/BeeLine.java' with conflicts. Going to apply patch with: git apply -p1 error: patch failed: beeline/src/java/org/apache/hive/beeline/BeeLine.java:555 Falling back to three-way merge... Applied patch to 'beeline/src/java/org/apache/hive/beeline/BeeLine.java' with conflicts. 
U beeline/src/java/org/apache/hive/beeline/BeeLine.java + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12920878 - PreCommit-HIVE-Build > Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1 > - > > Key: HIVE-19325 > URL: https://issues.apache.org/jira/browse/HIVE-19325 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 0.13.1 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Major > Fix For: 0.13.1 > > Attachments: HIVE-19325-0.13.1.patch, HIVE-19325-branch-0.13.1.patch, > HIVE-19325.0.13.1.patch, HIVE-19325.branch-0.13.1.patch > > > This Jira is not meant to be contributed back, but I'm using it as a way to > run unit tests against a patch file. > Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool > Remove beeline -n flag used for impersonation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456265#comment-16456265 ] Hive QA commented on HIVE-19270: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-10528/patches/PreCommit-HIVE-Build-10528.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10528/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-19315.01.patch > > > The following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have a similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19285) Add logs to the subclasses of MetaDataOperation
[ https://issues.apache.org/jira/browse/HIVE-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456271#comment-16456271 ] Marta Kuczora commented on HIVE-19285: -- Thanks a lot [~pvary] for committing the patch. > Add logs to the subclasses of MetaDataOperation > --- > > Key: HIVE-19285 > URL: https://issues.apache.org/jira/browse/HIVE-19285 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Minor > Fix For: 3.1.0 > > Attachments: HIVE-19285.1.patch, HIVE-19285.2.patch, > HIVE-19285.3.patch > > > Subclasses of MetaDataOperation are not writing anything to the logs. It > would be useful to have some INFO and DEBUG level logging in these classes. > The following classes are affected > * GetCatalogsOperation > * GetColumnsOperation > * GetFunctionsOperation > * GetSchemasOperation > * GetTablesOperation > * GetTypeInfoOperation > * GetTableTypesOperation > * GetCrossReferenceOperation > * GetPrimaryKeysOperation -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456358#comment-16456358 ] Hive QA commented on HIVE-19270: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920883/HIVE-19315.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 48 failed/errored test(s), 14275 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_empty_strings] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_smb] (batchId=176) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=183) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[update_non_acid_table] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0 (batchId=298) 
org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError (batchId=298) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.Te
[jira] [Commented] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456359#comment-16456359 ] Sankar Hariappan commented on HIVE-19270: - [~ekoifman], I think you attached the wrong patch. > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-19315.01.patch > > > The following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have a similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables
[ https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456378#comment-16456378 ] Hive QA commented on HIVE-19327: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10529/dev-support/hive-personality.sh | | git revision | master / 4776497 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10529/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > qroupby_rollup_empty.q fails for insert-only transactional tables > - > > Key: HIVE-19327 > URL: https://issues.apache.org/jira/browse/HIVE-19327 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19327.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19212) Fix findbugs yetus pre-commit checks
[ https://issues.apache.org/jira/browse/HIVE-19212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456469#comment-16456469 ] Adam Szita commented on HIVE-19212: --- [~stakiar] the latest patch looks good, one small thing: can you please rename your variable 'buildScratchDir' to 'yetusBuildScratchDir' or something alike so we don't confuse it with the parent scratch dir? Otherwise +1 non binding > Fix findbugs yetus pre-commit checks > > > Key: HIVE-19212 > URL: https://issues.apache.org/jira/browse/HIVE-19212 > Project: Hive > Issue Type: Sub-task > Components: Testing Infrastructure >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19212.1.patch, HIVE-19212.2.patch > > > Follow up from HIVE-18883, the committed patch isn't working and Findbugs is > still not working. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18988) Support bootstrap replication of ACID tables
[ https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18988: Status: Open (was: Patch Available) > Support bootstrap replication of ACID tables > > > Key: HIVE-18988 > URL: https://issues.apache.org/jira/browse/HIVE-18988 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Fix For: 3.1.0 > > Attachments: HIVE-18988.01.patch, HIVE-18988.02.patch, > HIVE-18988.03.patch, HIVE-18988.04.patch, HIVE-18988.05.patch > > > Bootstrapping of ACID tables needs special handling to replicate a stable > state of the data. > - If the ACID feature is enabled, perform the bootstrap dump for ACID tables > within a read txn. > -> Dump table/partition metadata. > -> Get the list of valid data files for a table using the same logic as a > read txn does. > -> Dump the latest ValidWriteIdList as per the current read txn. > - Set the valid last replication state such that it doesn't miss any open > txn started after triggering the bootstrap dump. > - For txns that were opened before triggering the bootstrap dump and are > still on-going, it is not guaranteed that an open_txn event was captured for > them. Also, if these txns were opened for the streaming ingest case, the > dumped ACID table data may include data of open txns, which breaks snapshot > isolation at the target. To avoid that, the bootstrap dump should wait for a > timeout (new configuration: hive.repl.bootstrap.dump.open.txn.timeout) and > then force abort those txns and continue. > - If any force-aborted txns belong to the streaming ingest case, the dumped > ACID table data may contain aborted data too. So it is necessary to > replicate the aborted write ids to the target to mark that data invalid for > any readers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18988) Support bootstrap replication of ACID tables
[ https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18988: Attachment: HIVE-18988.06.patch > Support bootstrap replication of ACID tables > > > Key: HIVE-18988 > URL: https://issues.apache.org/jira/browse/HIVE-18988 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Fix For: 3.1.0 > > Attachments: HIVE-18988.01.patch, HIVE-18988.02.patch, > HIVE-18988.03.patch, HIVE-18988.04.patch, HIVE-18988.05.patch, > HIVE-18988.06.patch > > > Bootstrapping of ACID tables needs special handling to replicate a stable > state of the data. > - If the ACID feature is enabled, perform the bootstrap dump for ACID tables > within a read txn. > -> Dump table/partition metadata. > -> Get the list of valid data files for a table using the same logic as a > read txn does. > -> Dump the latest ValidWriteIdList as per the current read txn. > - Set the valid last replication state such that it doesn't miss any open > txn started after triggering the bootstrap dump. > - For txns that were opened before triggering the bootstrap dump and are > still on-going, it is not guaranteed that an open_txn event was captured for > them. Also, if these txns were opened for the streaming ingest case, the > dumped ACID table data may include data of open txns, which breaks snapshot > isolation at the target. To avoid that, the bootstrap dump should wait for a > timeout (new configuration: hive.repl.bootstrap.dump.open.txn.timeout) and > then force abort those txns and continue. > - If any force-aborted txns belong to the streaming ingest case, the dumped > ACID table data may contain aborted data too. So it is necessary to > replicate the aborted write ids to the target to mark that data invalid for > any readers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18988) Support bootstrap replication of ACID tables
[ https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18988: Status: Patch Available (was: Open) Attached 06.patch with fixes for review comments from [~maheshk114].
[jira] [Commented] (HIVE-19327) qroupby_rollup_empty.q fails for insert-only transactional tables
[ https://issues.apache.org/jira/browse/HIVE-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456520#comment-16456520 ] Hive QA commented on HIVE-19327: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920882/HIVE-19327.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 58 failed/errored test(s), 14284 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) 
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.llap.daemon.impl.comparator.TestAMReporter.testMultipleAM (batchId=309) org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableChangingDatabase[Remote] (batchId=209) org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableRename[Remote] (batchId=209) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1 
(batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError (batchId=298) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apa
[jira] [Commented] (HIVE-19331) Repl load config in "with" clause not pass to Context.getStagingDir
[ https://issues.apache.org/jira/browse/HIVE-19331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456562#comment-16456562 ] Hive QA commented on HIVE-19331: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10530/dev-support/hive-personality.sh | | git revision | master / 4776497 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10530/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Repl load config in "with" clause not pass to Context.getStagingDir > --- > > Key: HIVE-19331 > URL: https://issues.apache.org/jira/browse/HIVE-19331 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19331.1.patch > > > Another failure similar to HIVE-18626, causing exception when s3 credentials > are in "REPL LOAD" with clause. 
> {code} > Caused by: java.lang.IllegalStateException: Error getting FileSystem for > s3a://nat-yc-r7-nmys-beacon-cloud-s3-2/hive_incremental_testing.db/hive_incremental_testing_new_tabl...: > org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on > nat-yc-r7-nmys-beacon-cloud-s3-2: com.amazonaws.AmazonClientException: No AWS > Credentials provided by BasicAWSCredentialsProvider > EnvironmentVariableCredentialsProvider > SharedInstanceProfileCredentialsProvider : > com.amazonaws.AmazonClientException: Unable to load credentials from Amazon > EC2 metadata service: No AWS Credentials provided by > BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider > SharedInstanceProfileCredentialsProvider : > com.amazonaws.AmazonClientException: Unable to load credentials from Amazon > EC2 metadata service > at org.apache.hadoop.hive.ql.Context.getStagingDir(Context.java:359) > at > org.apache.hadoop.hive.ql.Context.getExternalScratchDir(Context.java:487) > at > org.apache.hadoop.hive.ql.Context.getExternalTmpPath(Context.java:565) > at > org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.loadTable(ImportSemanticAnalyzer.java:370) > at > org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.creat
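The failure above happens because the credentials given in the REPL LOAD "WITH (...)" clause never reach the configuration that Context.getStagingDir consults. The idea of the fix can be sketched as a plain conf-layering step; the class and method names below are illustrative, not Hive's API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: WITH-clause options must be layered over the session
// configuration *before* any FileSystem / staging-dir lookup, otherwise
// per-command settings such as fs.s3a.* credentials are silently dropped.
public class ReplLoadConf {
    public static Map<String, String> effectiveConf(Map<String, String> session,
                                                    Map<String, String> withClause) {
        Map<String, String> merged = new HashMap<>(session);
        merged.putAll(withClause); // WITH-clause entries take precedence
        return merged;
    }
}
```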
[jira] [Commented] (HIVE-19239) Check for possible null timestamp fields during SerDe from Druid events
[ https://issues.apache.org/jira/browse/HIVE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456576#comment-16456576 ] slim bouguerra commented on HIVE-19239: --- [~ashutoshc] sorry, I missed that. Yes, there are two cases: first, if we use an extraction function or a virtual column projection that can return null; second, with the newly added grouping sets feature this will also become true. Here is the discussion thread: [https://github.com/druid-io/druid/pull/5659] > Check for possible null timestamp fields during SerDe from Druid events > --- > > Key: HIVE-19239 > URL: https://issues.apache.org/jira/browse/HIVE-19239 > Project: Hive > Issue Type: Bug >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19239.patch > > > Currently we do not check for possible null timestamp events. > This might lead to an NPE. > This patch adds an additional check for such cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
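The guard the issue describes amounts to a null check before the timestamp is converted. The sketch below illustrates that check only; the class and method names are invented for this example and are not the actual SerDe code.

```java
// A Druid row's timestamp can be null when it was produced by an
// extraction function, a virtual column projection, or a grouping set,
// so the SerDe must check before dereferencing it.
public class DruidTimestampGuard {
    public static Long toEpochMillis(Object timestampField) {
        if (timestampField == null) {
            return null; // propagate null instead of throwing an NPE
        }
        return ((Number) timestampField).longValue();
    }
}
```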
[jira] [Updated] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-19270: -- Status: Open (was: Patch Available) > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > > The following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have a similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-19270: -- Attachment: (was: HIVE-19315.01.patch) > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > > Following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have the similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-19270: -- Status: Patch Available (was: Open) > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-19270.01.patch > > > Following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have the similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456610#comment-16456610 ] Eugene Koifman commented on HIVE-19270: --- [~sankarh], correct one attached > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-19270.01.patch > > > Following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have the similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-19270: -- Attachment: HIVE-19270.01.patch > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-19270.01.patch > > > Following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have the similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18881) Lower Logging for FSStatsAggregator
[ https://issues.apache.org/jira/browse/HIVE-18881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456613#comment-16456613 ] Antal Sinkovits commented on HIVE-18881: The test failures are not related. > Lower Logging for FSStatsAggregator > --- > > Key: HIVE-18881 > URL: https://issues.apache.org/jira/browse/HIVE-18881 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Antal Sinkovits >Priority: Trivial > Labels: noob > Attachments: HIVE-18881.2.patch, HIVE-18881.patch > > > [https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/stats/fs/FSStatsAggregator.java#L101] > {code:java} > LOG.info("Read stats for : " + partID + "\t" + statType + "\t" + counter); > {code} > # All the other logging in this class is _debug_ or _error_ level logging. > This should be _debug_ as well > # Remove tab characters to allow splitting on tabs in any kind of > tab-separated file of log lines > # Use SLF4J parameterized logging -- This message was sent by Atlassian JIRA (v7.6.3#76005)
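The three suggested fixes collapse into one SLF4J call along the lines of LOG.debug("Read stats for {} {} {}", partID, statType, counter); (the exact message is up to the patch). Since running that requires the SLF4J jar on the classpath, the self-contained sketch below only demonstrates the {} placeholder substitution that parameterized logging relies on.

```java
// Stand-in for SLF4J's "{}" substitution so the example runs without the
// SLF4J jar. With the real API, the framework performs this substitution
// lazily, so the message string is never built when the level is disabled.
public class ParamLog {
    public static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (argIdx < args.length && pattern.startsWith("{}", i)) {
                out.append(args[argIdx++]); // substitute next argument
                i += 2;
            } else {
                out.append(pattern.charAt(i++));
            }
        }
        return out.toString();
    }
}
```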
[jira] [Commented] (HIVE-18880) Change Log to Debug in CombineHiveInputFormat
[ https://issues.apache.org/jira/browse/HIVE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456617#comment-16456617 ] Antal Sinkovits commented on HIVE-18880: The test failures are not related. > Change Log to Debug in CombineHiveInputFormat > - > > Key: HIVE-18880 > URL: https://issues.apache.org/jira/browse/HIVE-18880 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Antal Sinkovits >Priority: Trivial > Labels: noob > Attachments: HIVE-18880.2.patch, HIVE-18880.patch > > > [https://github.com/apache/hive/blob/1e74aca8d09ea2ef636311d2168b4d34198f7194/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java#L467] > {code:java} > private InputSplit[] getCombineSplits(JobConf job, int numSplits, > Map pathToPartitionInfo) { > ... > LOG.info("number of splits " + result.size()); > ... > } > {code} > [https://github.com/apache/hive/blob/1e74aca8d09ea2ef636311d2168b4d34198f7194/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java#L587] > {code:java} > public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException { > ... > LOG.info("Number of all splits " + result.size()); > ... > } > {code} > # Capitalize "N"umber in the first logging to be consistent across all > logging statements > # Change the first logging message to be _debug_ level seeing as it's in a > private method. > It's an implementation logging and the entire total (most useful for a > client) is captured in _info_ level at the end of the public method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18903) Lower Logging Level for ObjectStore
[ https://issues.apache.org/jira/browse/HIVE-18903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456631#comment-16456631 ] Antal Sinkovits commented on HIVE-18903: The test failures are not related. > Lower Logging Level for ObjectStore > --- > > Key: HIVE-18903 > URL: https://issues.apache.org/jira/browse/HIVE-18903 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: Antal Sinkovits >Priority: Minor > Labels: noob > Attachments: HIVE-18903.2.patch, HIVE-18903.patch > > > [https://github.com/apache/hive/blob/7c22d74c8d0eb0650adf6e84e0536127c103e46c/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java] > > {code:java} > 2018-03-01 06:51:58,051 INFO org.apache.hadoop.hive.metastore.ObjectStore: > [pool-4-thread-13]: ObjectStore, initialize called > 2018-03-01 06:51:58,052 INFO org.apache.hadoop.hive.metastore.ObjectStore: > [pool-4-thread-13]: Initialized ObjectStore > {code} > Nothing actionable or all that useful here. Please lower to _debug_ or > _trace_ level logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19270) TestAcidOnTez tests are failing
[ https://issues.apache.org/jira/browse/HIVE-19270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456636#comment-16456636 ] Sankar Hariappan commented on HIVE-19270: - +1, pending tests > TestAcidOnTez tests are failing > --- > > Key: HIVE-19270 > URL: https://issues.apache.org/jira/browse/HIVE-19270 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-19270.01.patch > > > Following tests are failing: > * testCtasTezUnion > * testNonStandardConversion01 > * testAcidInsertWithRemoveUnion > All of them have the similar failure: > {noformat} > Actual line 0 ac: {"writeid":1,"bucketid":536870913,"rowid":1} 1 2 > file:/home/hiveptest/35.193.47.6-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmp/org.apache.hadoop.hive.ql.TestAcidOnTez-1524409020904/warehouse/t/delta_001_001_0001/bucket_0 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs
[ https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456664#comment-16456664 ] Sahil Takiar commented on HIVE-18533: - [~lirui] could you take a look? Below is a brief description of the change. RB: https://reviews.apache.org/r/66071/ * Added the ability to launch jobs via Spark's {{InProcessLauncher}} rather than invoking {{bin/spark-submit}} * Users can pick which launcher they want to use; by default the {{spark-submit}} launcher is used * Renamed {{SparkClientImpl}} to {{AbstractSparkClient}}; it contains all the common logic between the two launchers ** {{AbstractSparkClient}} has two subclasses: {{SparkLauncherSparkClient}} which uses the {{InProcessLauncher}} and {{SparkSubmitSparkClient}} which uses {{spark-submit}} ** The changes to {{SparkClientImpl}} are mostly just re-factoring; I did my best to ensure there are no logic changes; the code is now mostly split between {{AbstractSparkClient}} and {{SparkSubmitSparkClient}} *** The biggest change in logic is that {{SparkSubmitSparkClient#startDriver}} now returns a {{Future}} object instead of a {{Thread}} object ** {{AbstractSparkClient}} has a number of {{abstract}} methods that decide how certain configuration options need to be set - e.g. how to add jars, specify the keytab / principal, etc. 
** Its main method is {{launchDriver}}, which specifies how to actually launch the Spark app; it returns a {{Future}} object which is used to monitor the state of the Spark app * {{SparkLauncherSparkClient}} is essentially a wrapper around {{InProcessLauncher}}, and it contains a custom {{Future}} implementation that monitors the underlying Spark app using the APIs exposed by the {{InProcessLauncher}} * Added unit tests and a q-test > Add option to use InProcessLauncher to submit spark jobs > > > Key: HIVE-18533 > URL: https://issues.apache.org/jira/browse/HIVE-18533 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18533.1.patch, HIVE-18533.2.patch, > HIVE-18533.3.patch, HIVE-18533.4.patch, HIVE-18533.5.patch, > HIVE-18533.6.patch, HIVE-18533.7.patch, HIVE-18533.8.patch > > > See discussion in HIVE-16484 for details. > I think this will help with reducing the amount of time it takes to open a > HoS session + debuggability (no need to launch a separate process to run a Spark > app). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
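The refactoring described in the review summary can be sketched as an abstract base class that owns the shared logic and delegates only the launch step, which now hands back a Future for monitoring. Only the shape below is taken from the description; class names are abbreviated and the method bodies are stand-ins, not the real implementation.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Shared client logic lives in the abstract class; each subclass decides
// only how the Spark driver is started, returning a Future instead of a
// Thread so the caller can uniformly monitor the app's state.
public abstract class AbstractSparkClientSketch {
    protected abstract Future<Void> launchDriver();
    protected abstract String launcherName();

    /** Shared logic: start the driver, then monitor it via the Future. */
    public String run() {
        try {
            launchDriver().get();
            return launcherName();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

class SparkSubmitSketch extends AbstractSparkClientSketch {
    @Override protected Future<Void> launchDriver() {
        // the real client forks bin/spark-submit and watches the child process
        return CompletableFuture.completedFuture(null);
    }
    @Override protected String launcherName() { return "spark-submit"; }
}

class InProcessSketch extends AbstractSparkClientSketch {
    @Override protected Future<Void> launchDriver() {
        // the real client wraps Spark's InProcessLauncher, adapting its
        // state listener into a custom Future implementation
        return CompletableFuture.completedFuture(null);
    }
    @Override protected String launcherName() { return "in-process"; }
}
```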
[jira] [Commented] (HIVE-19331) Repl load config in "with" clause not pass to Context.getStagingDir
[ https://issues.apache.org/jira/browse/HIVE-19331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456669#comment-16456669 ] Sahil Takiar commented on HIVE-19331: - We are doing some maintenance on ptest, and the Jira publishing functionality broke for your patch. Here are the results from Hive QA: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920888/HIVE-19331.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 57 failed/errored test(s), 14284 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=183) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] (batchId=253) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1 (batchId=298) 
org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError (batchId=298) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hi
[jira] [Updated] (HIVE-18423) Support pushing computation from the optimizer for JDBC storage handler tables
[ https://issues.apache.org/jira/browse/HIVE-18423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18423: --- Resolution: Fixed Fix Version/s: (was: 3.1.0) 3.0.0 Status: Resolved (was: Patch Available) Pushed to master, branch-3. > Support pushing computation from the optimizer for JDBC storage handler tables > -- > > Key: HIVE-18423 > URL: https://issues.apache.org/jira/browse/HIVE-18423 > Project: Hive > Issue Type: Improvement >Reporter: Jonathan Doron >Assignee: Jonathan Doron >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18423-branch-3.patch, HIVE-18423.1.patch, > HIVE-18423.2.patch, HIVE-18423.3.patch, HIVE-18423.4.patch, > HIVE-18423.5.patch, HIVE-18423.6.patch > > > Hive should support the usage of external JDBC tables (and not only external > tables that hold queries), so a Hive user would be able to use the external > table as a Hive internal table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19323) Create metastore SQL install and upgrade scripts for 3.1
[ https://issues.apache.org/jira/browse/HIVE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456739#comment-16456739 ] Hive QA commented on HIVE-19323: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 5s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 59 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10531/dev-support/hive-personality.sh | | git revision | master / 4776497 | | Default Java | 1.8.0_111 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-10531/yetus/whitespace-eol.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-10531/yetus/whitespace-tabs.txt | | modules | C: . itests/hive-unit packaging standalone-metastore U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10531/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Create metastore SQL install and upgrade scripts for 3.1 > > > Key: HIVE-19323 > URL: https://issues.apache.org/jira/browse/HIVE-19323 > Project: Hive > Issue Type: Task > Components: Metastore >Affects Versions: 3.1.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19323.2.patch, HIVE-19323.patch > > > Now that we've branched for 3.0 we need to create SQL install and upgrade > scripts for 3.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19135: -- Attachment: HIVE-19135.3.patch > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456775#comment-16456775 ] Alan Gates commented on HIVE-19135: --- Third version of the patch, with latest feedback from Thejas incorporated. > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19331) Repl load config in "with" clause not pass to Context.getStagingDir
[ https://issues.apache.org/jira/browse/HIVE-19331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19331: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 3.0.0 Status: Resolved (was: Patch Available) Patch pushed to both master and branch-3. > Repl load config in "with" clause not pass to Context.getStagingDir > --- > > Key: HIVE-19331 > URL: https://issues.apache.org/jira/browse/HIVE-19331 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19331.1.patch > > > Another failure similar to HIVE-18626, causing exception when s3 credentials > are in "REPL LOAD" with clause. > {code} > Caused by: java.lang.IllegalStateException: Error getting FileSystem for > s3a://nat-yc-r7-nmys-beacon-cloud-s3-2/hive_incremental_testing.db/hive_incremental_testing_new_tabl...: > org.apache.hadoop.fs.s3a.AWSClientIOException: doesBucketExist on > nat-yc-r7-nmys-beacon-cloud-s3-2: com.amazonaws.AmazonClientException: No AWS > Credentials provided by BasicAWSCredentialsProvider > EnvironmentVariableCredentialsProvider > SharedInstanceProfileCredentialsProvider : > com.amazonaws.AmazonClientException: Unable to load credentials from Amazon > EC2 metadata service: No AWS Credentials provided by > BasicAWSCredentialsProvider EnvironmentVariableCredentialsProvider > SharedInstanceProfileCredentialsProvider : > com.amazonaws.AmazonClientException: Unable to load credentials from Amazon > EC2 metadata service > at org.apache.hadoop.hive.ql.Context.getStagingDir(Context.java:359) > at > org.apache.hadoop.hive.ql.Context.getExternalScratchDir(Context.java:487) > at > org.apache.hadoop.hive.ql.Context.getExternalTmpPath(Context.java:565) > at > org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.loadTable(ImportSemanticAnalyzer.java:370) > at > 
org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.createReplImportTasks(ImportSemanticAnalyzer.java:926) > at > org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.prepareImport(ImportSemanticAnalyzer.java:329) > at > org.apache.hadoop.hive.ql.parse.repl.load.message.TableHandler.handle(TableHandler.java:43) > ... 24 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19323) Create metastore SQL install and upgrade scripts for 3.1
[ https://issues.apache.org/jira/browse/HIVE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456784#comment-16456784 ] Hive QA commented on HIVE-19323: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920910/HIVE-19323.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 53 failed/errored test(s), 14284 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic] (batchId=253) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError (batchId=298) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestActivePassiveHA.testClientConnectionsOnFailover (batchId=242) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) 
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) org.apache.hive.minikdc.TestJdbcWithDBToke
[jira] [Updated] (HIVE-19198) Few flaky hcatalog tests
[ https://issues.apache.org/jira/browse/HIVE-19198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19198: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Patch pushed to master. > Few flaky hcatalog tests > > > Key: HIVE-19198 > URL: https://issues.apache.org/jira/browse/HIVE-19198 > Project: Hive > Issue Type: Sub-task >Reporter: Ashutosh Chauhan >Assignee: Daniel Dai >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19198.1.patch, HIVE-19198.2.patch > > > TestPermsGrp : Consider removing this since hcat cli is not widely used. > TestHCatPartitionPublish.testPartitionPublish > TestHCatMultiOutputFormat.testOutputFormat -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456824#comment-16456824 ] Thejas M Nair commented on HIVE-19135: -- +1 > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19212) Fix findbugs yetus pre-commit checks
[ https://issues.apache.org/jira/browse/HIVE-19212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19212: Attachment: HIVE-19212.3.patch > Fix findbugs yetus pre-commit checks > > > Key: HIVE-19212 > URL: https://issues.apache.org/jira/browse/HIVE-19212 > Project: Hive > Issue Type: Sub-task > Components: Testing Infrastructure >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19212.1.patch, HIVE-19212.2.patch, > HIVE-19212.3.patch > > > Follow up from HIVE-18883, the committed patch isn't working and Findbugs is > still not working. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19212) Fix findbugs yetus pre-commit checks
[ https://issues.apache.org/jira/browse/HIVE-19212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456835#comment-16456835 ] Sahil Takiar commented on HIVE-19212: - [~szita] addressed your comments. [~pvary] could you take a look? > Fix findbugs yetus pre-commit checks > > > Key: HIVE-19212 > URL: https://issues.apache.org/jira/browse/HIVE-19212 > Project: Hive > Issue Type: Sub-task > Components: Testing Infrastructure >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19212.1.patch, HIVE-19212.2.patch, > HIVE-19212.3.patch > > > Follow up from HIVE-18883, the committed patch isn't working and Findbugs is > still not working. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18880) Change Log to Debug in CombineHiveInputFormat
[ https://issues.apache.org/jira/browse/HIVE-18880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18880: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~asinkovits] for the contribution! > Change Log to Debug in CombineHiveInputFormat > - > > Key: HIVE-18880 > URL: https://issues.apache.org/jira/browse/HIVE-18880 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Antal Sinkovits >Priority: Trivial > Labels: noob > Fix For: 3.1.0 > > Attachments: HIVE-18880.2.patch, HIVE-18880.patch > > > [https://github.com/apache/hive/blob/1e74aca8d09ea2ef636311d2168b4d34198f7194/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java#L467] > {code:java} > private InputSplit[] getCombineSplits(JobConf job, int numSplits, > Map pathToPartitionInfo) { > ... > LOG.info("number of splits " + result.size()); > ... > } > {code} > [https://github.com/apache/hive/blob/1e74aca8d09ea2ef636311d2168b4d34198f7194/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java#L587] > {code:java} > public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException { > ... > LOG.info("Number of all splits " + result.size()); > ... > } > {code} > # Capitalize "N"umber in the first logging to be consistent across all > logging statements > # Change the first logging message to be _debug_ level seeing as it's in a > private method. > It's implementation-level logging, and the entire total (most useful for a > client) is captured at _info_ level at the end of the public method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
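[Editor's sketch] The pattern requested in HIVE-18880 above (demote the private method's per-call split count to debug, keep the client-facing total at info) can be illustrated with the JDK's own logging API. CombineHiveInputFormat itself uses SLF4J, and the class name, `combineSplits`, and the `/ 4` combining formula below are placeholders for illustration, not Hive's actual code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SplitLogging {
    private static final Logger LOG = Logger.getLogger(SplitLogging.class.getName());

    // Stand-in for the private getCombineSplits(): an implementation detail,
    // so its count is logged at debug (FINE) level, not info.
    static int combineSplits(int rawSplits) {
        int combined = Math.max(1, rawSplits / 4); // placeholder for the real combining logic
        LOG.log(Level.FINE, "Number of splits {0}", combined);
        return combined;
    }

    public static void main(String[] args) {
        int total = combineSplits(16);
        // The total most useful to a client stays at info level in the public method.
        LOG.log(Level.INFO, "Number of all splits {0}", total);
    }
}
```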
[jira] [Commented] (HIVE-19311) Partition and bucketing support for “load data” statement
[ https://issues.apache.org/jira/browse/HIVE-19311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456849#comment-16456849 ] Deepak Jaiswal commented on HIVE-19311: --- A gentle reminder [~ashutoshc] [~jcamachorodriguez] [~ekoifman] and [~vgarg] > Partition and bucketing support for “load data” statement > - > > Key: HIVE-19311 > URL: https://issues.apache.org/jira/browse/HIVE-19311 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19311.1.patch, HIVE-19311.2.patch, > HIVE-19311.3.patch > > > Currently, "load data" statement is very limited. It errors out if any of the > information is missing such as partitioning info if table is partitioned or > appropriate names when table is bucketed. > It should be able to launch an insert job to load the data instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing
[ https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456860#comment-16456860 ] Jason Dere commented on HIVE-18910: --- Just add that one comment per RB review, otherwise +1 on my end pending tests. > Migrate to Murmur hash for shuffle and bucketing > > > Key: HIVE-18910 > URL: https://issues.apache.org/jira/browse/HIVE-18910 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, > HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, > HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, > HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.19.patch, > HIVE-18910.2.patch, HIVE-18910.20.patch, HIVE-18910.21.patch, > HIVE-18910.22.patch, HIVE-18910.23.patch, HIVE-18910.24.patch, > HIVE-18910.25.patch, HIVE-18910.26.patch, HIVE-18910.27.patch, > HIVE-18910.28.patch, HIVE-18910.29.patch, HIVE-18910.3.patch, > HIVE-18910.30.patch, HIVE-18910.31.patch, HIVE-18910.32.patch, > HIVE-18910.33.patch, HIVE-18910.34.patch, HIVE-18910.35.patch, > HIVE-18910.36.patch, HIVE-18910.36.patch, HIVE-18910.37.patch, > HIVE-18910.38.patch, HIVE-18910.39.patch, HIVE-18910.4.patch, > HIVE-18910.40.patch, HIVE-18910.41.patch, HIVE-18910.42.patch, > HIVE-18910.43.patch, HIVE-18910.44.patch, HIVE-18910.5.patch, > HIVE-18910.6.patch, HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch > > > Hive uses JAVA hash which is not as good as murmur for better distribution > and efficiency in bucketing a table. > Migrate to murmur hash but still keep backward compatibility for existing > users so that they dont have to reload the existing tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
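[Editor's sketch] For readers unfamiliar with the hash family HIVE-18910 adopts, below is a minimal Murmur3 (x86, 32-bit variant) plus the non-negative-modulo bucket computation it would feed. This is an illustration of the algorithm, not Hive's `org.apache.hive.common.util.Murmur3` class; the seed 104729 is assumed here for the example and may differ from what Hive ships:

```java
public class MurmurBucketing {
    // Murmur3 x86 32-bit: body blocks, tail bytes, then an avalanche finalizer.
    static int murmur3x86_32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h = seed, i = 0;
        // Body: mix each 4-byte little-endian block into the running hash.
        for (; i + 4 <= data.length; i += 4) {
            int k = (data[i] & 0xff) | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16) | ((data[i + 3] & 0xff) << 24);
            k *= c1; k = Integer.rotateLeft(k, 15); k *= c2;
            h ^= k; h = Integer.rotateLeft(h, 13); h = h * 5 + 0xe6546b64;
        }
        // Tail: 1-3 leftover bytes (switch fall-through is intentional).
        int k = 0;
        switch (data.length - i) {
            case 3: k ^= (data[i + 2] & 0xff) << 16;
            case 2: k ^= (data[i + 1] & 0xff) << 8;
            case 1: k ^= data[i] & 0xff;
                    k *= c1; k = Integer.rotateLeft(k, 15); k *= c2; h ^= k;
        }
        // Finalization: fold in the length and avalanche the bits.
        h ^= data.length;
        h ^= h >>> 16; h *= 0x85ebca6b; h ^= h >>> 13; h *= 0xc2b2ae35; h ^= h >>> 16;
        return h;
    }

    // Bucket id for a key: mask to non-negative before the modulo so the
    // result is always in [0, numBuckets).
    static int bucketFor(byte[] key, int numBuckets) {
        return (murmur3x86_32(key, 104729) & Integer.MAX_VALUE) % numBuckets;
    }
}
```

The mixing and finalization steps are what give the better distribution mentioned in the issue compared with `String.hashCode()`-style polynomial hashes.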
[jira] [Updated] (HIVE-18903) Lower Logging Level for ObjectStore
[ https://issues.apache.org/jira/browse/HIVE-18903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18903: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~asinkovits] for the contribution! > Lower Logging Level for ObjectStore > --- > > Key: HIVE-18903 > URL: https://issues.apache.org/jira/browse/HIVE-18903 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: Antal Sinkovits >Priority: Minor > Labels: noob > Fix For: 3.1.0 > > Attachments: HIVE-18903.2.patch, HIVE-18903.patch > > > [https://github.com/apache/hive/blob/7c22d74c8d0eb0650adf6e84e0536127c103e46c/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java] > > {code:java} > 2018-03-01 06:51:58,051 INFO org.apache.hadoop.hive.metastore.ObjectStore: > [pool-4-thread-13]: ObjectStore, initialize called > 2018-03-01 06:51:58,052 INFO org.apache.hadoop.hive.metastore.ObjectStore: > [pool-4-thread-13]: Initialized ObjectStore > {code} > Nothing actionable or all that useful here. Please lower to _debug_ or > _trace_ level logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster
[ https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456868#comment-16456868 ] Sergey Shelukhin commented on HIVE-19340: - Never aborting a transaction seems dangerous. If it does get stuck forever, what is the user supposed to do with it? And if the user aborts it manually, it's the same as timeout, just aggravating to the user. You still have to handle when user aborts it. If it's ok to get stuck for a long time, why not just increase the heartbeat timeout for it? > Disable timeout of transactions opened by replication task at target cluster > > > Key: HIVE-19340 > URL: https://issues.apache.org/jira/browse/HIVE-19340 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-19340.01.patch > > > The transactions opened by applying EVENT_OPEN_TXN should never be aborted > automatically due to time-out. Aborting of transaction started by replication > task may leads to inconsistent state at target which needs additional > overhead to clean-up. So, it is proposed to mark the transactions opened by > replication task as special ones and shouldn't be aborted if heart beat is > lost. This helps to ensure all ABORT and COMMIT events will always find the > corresponding txn at target to operate. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
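[Editor's sketch] The trade-off raised in the comment above (never aborting replication transactions vs. simply giving them a larger heartbeat timeout) can be made concrete with a toy reaper check. The names `shouldAbort` and `isReplTxn` and the timing values are hypothetical, not Hive's TxnHandler API:

```java
import java.time.Duration;
import java.time.Instant;

public class TxnReaperSketch {
    // Decide whether a transaction's missed heartbeats warrant an abort.
    // isReplTxn marks a transaction opened by the replication task.
    static boolean shouldAbort(Instant lastHeartbeat, Duration timeout,
                               boolean isReplTxn, Instant now) {
        if (isReplTxn) {
            // The proposal in HIVE-19340: replication txns are never timed out.
            // The comment's alternative would instead pass a much larger
            // timeout here rather than disabling the check entirely.
            return false;
        }
        return Duration.between(lastHeartbeat, now).compareTo(timeout) > 0;
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        Instant stale = now.minus(Duration.ofMinutes(10));
        System.out.println(shouldAbort(stale, Duration.ofMinutes(5), false, now)); // true
        System.out.println(shouldAbort(stale, Duration.ofMinutes(5), true, now));  // false
    }
}
```

Either way, as the comment notes, a manual abort by the user still has to be handled, since the `isReplTxn` branch only removes the automatic path.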
[jira] [Commented] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster
[ https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456869#comment-16456869 ] Sergey Shelukhin commented on HIVE-19340: - cc [~ekoifman] > Disable timeout of transactions opened by replication task at target cluster > > > Key: HIVE-19340 > URL: https://issues.apache.org/jira/browse/HIVE-19340 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-19340.01.patch > > > The transactions opened by applying EVENT_OPEN_TXN should never be aborted > automatically due to time-out. Aborting of transaction started by replication > task may leads to inconsistent state at target which needs additional > overhead to clean-up. So, it is proposed to mark the transactions opened by > replication task as special ones and shouldn't be aborted if heart beat is > lost. This helps to ensure all ABORT and COMMIT events will always find the > corresponding txn at target to operate. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster
[ https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456868#comment-16456868 ] Sergey Shelukhin edited comment on HIVE-19340 at 4/27/18 6:25 PM: -- Never aborting a transaction seems dangerous. If it does get stuck forever, what is the user supposed to do with it? And if the user aborts it manually, it's the same as timeout, just aggravating to the user. You still have to handle when user aborts it. If it's ok to get stuck for a long time but not forever, why not just increase the heartbeat timeout for it? was (Author: sershe): Never aborting a transaction seems dangerous. If it does get stuck forever, what is the user supposed to do with it? And if the user aborts it manually, it's the same as timeout, just aggravating to the user. You still have to handle when user aborts it. If it's ok to get stuck for a long time, why not just increase the heartbeat timeout for it? > Disable timeout of transactions opened by replication task at target cluster > > > Key: HIVE-19340 > URL: https://issues.apache.org/jira/browse/HIVE-19340 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-19340.01.patch > > > The transactions opened by applying EVENT_OPEN_TXN should never be aborted > automatically due to time-out. Aborting of transaction started by replication > task may leads to inconsistent state at target which needs additional > overhead to clean-up. So, it is proposed to mark the transactions opened by > replication task as special ones and shouldn't be aborted if heart beat is > lost. This helps to ensure all ABORT and COMMIT events will always find the > corresponding txn at target to operate. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456872#comment-16456872 ] Hive QA commented on HIVE-18958: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle 
{color} | {color:green} 1m 54s{color} | {color:green} root: The patch generated 0 new + 68 unchanged - 8 fixed = 68 total (was 76) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} The patch util passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} ql: The patch generated 0 new + 19 unchanged - 3 fixed = 19 total (was 22) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} spark-client: The patch generated 0 new + 21 unchanged - 5 fixed = 21 total (was 26) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense xml javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10532/dev-support/hive-personality.sh | | git revision | master / e0b3182 | | Default Java | 1.8.0_111 | | modules | C: . itests/util ql spark-client U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10532/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Fix Spark config warnings > - > > Key: HIVE-18958 > URL: https://issues.apache.org/jira/browse/HIVE-18958 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, > HIVE-18958.03.patch > > > Getting a few configuration warnings in the logs that we should fix: > {code} > 2018-03-14T10:06:19,164 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has > been deprecated as of Spark 2.3 and may be removed in the future. Please use > the new key 'spark.driver.memoryOverhead' instead.
[jira] [Commented] (HIVE-19288) Implement protobuf logging hive hook.
[ https://issues.apache.org/jira/browse/HIVE-19288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456886#comment-16456886 ] Ashutosh Chauhan commented on HIVE-19288: - +1 LGTM. [~harishjp] can you please also create a follow-up jira to refactor this hook to extend from TezHook once Hive upgrades its Tez dependency. AFAICS you won't be able to reuse the hook from Tez as is, since this Hive hook provides extra info about tables, query id, etc. So it can extend the Tez hook but can't replace it. > Implement protobuf logging hive hook. > - > > Key: HIVE-19288 > URL: https://issues.apache.org/jira/browse/HIVE-19288 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Harish Jaiprakash >Assignee: Harish Jaiprakash >Priority: Major > Attachments: HIVE-19288.01.patch, HIVE-19288.02.patch > > > Implement a protobuf-based logger which will log Hive hook events into date-partitioned directories. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
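The description only pins down the on-disk layout: hook events land under a base directory partitioned by event date. A minimal sketch of that layout (the class and method names are illustrative assumptions, not the HIVE-19288 implementation):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DatePartitionedPath {

    private static final DateTimeFormatter DAY = DateTimeFormatter.ofPattern("yyyy-MM-dd");

    // Directory a hook event carrying the given date would be appended under,
    // e.g. <base>/date=2018-04-27 — a layout Hive can later read as a
    // date-partitioned table.
    public static String partitionDir(String base, LocalDate eventDate) {
        return base + "/date=" + DAY.format(eventDate);
    }
}
```

Writing events under `date=...` directories means the log can be queried with ordinary partition pruning instead of scanning every file.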
[jira] [Commented] (HIVE-19320) MapRedLocalTask is printing child log to stderr and stdout
[ https://issues.apache.org/jira/browse/HIVE-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456889#comment-16456889 ] Aihua Xu commented on HIVE-19320: - [~pvary] You are right. I think it's redirected to the HS2 log now with that change, but we can remove the output to the console. Do you agree? > MapRedLocalTask is printing child log to stderr and stdout > -- > > Key: HIVE-19320 > URL: https://issues.apache.org/jira/browse/HIVE-19320 > Project: Hive > Issue Type: Sub-task > Components: Logging >Affects Versions: 3.0.0 >Reporter: Aihua Xu >Priority: Major > > At this line, the local child MR task prints its logs to stderr and stdout: > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java#L341 > stderr/stdout should capture the service's own running log rather than the query > execution output; the latter should instead go to the HS2 log and propagate > to the Beeline console. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
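The fix being discussed amounts to pumping the child process's stdout/stderr into the service logger instead of echoing it to the parent's console. A hedged sketch of that pattern (class and method names are invented for illustration; this is not Hive's MapredLocalTask code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.function.Consumer;

public class ChildLogRedirect {

    // Drains one child stream line by line into the supplied sink (e.g. a
    // Log4j logger in HS2) instead of echoing it to System.out/System.err.
    static Thread pump(InputStream in, Consumer<String> sink) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null) {
                    sink.accept(line); // to the service log, not the console
                }
            } catch (IOException ignored) {
                // the stream closes when the child exits
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        Process child = new ProcessBuilder("echo", "hello from child").start();
        StringBuilder log = new StringBuilder();
        Thread out = pump(child.getInputStream(), log::append);
        child.waitFor();
        out.join();
        System.out.println("captured into log: " + log);
    }
}
```

With the sink pointed at the HS2 logger, the child's output reaches the server log (and can be propagated to Beeline) without ever touching the console.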
[jira] [Commented] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456895#comment-16456895 ] Hive QA commented on HIVE-18958: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920894/HIVE-18958.03.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 55 failed/errored test(s), 14284 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_smb] (batchId=176) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=183) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError (batchId=298) 
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadM
[jira] [Updated] (HIVE-19330) multi_insert_partitioned.q fails with "src table does not exist" message.
[ https://issues.apache.org/jira/browse/HIVE-19330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19330: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to master/branch-3 > multi_insert_partitioned.q fails with "src table does not exist" message. > - > > Key: HIVE-19330 > URL: https://issues.apache.org/jira/browse/HIVE-19330 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19330.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19330) multi_insert_partitioned.q fails with "src table does not exist" message.
[ https://issues.apache.org/jira/browse/HIVE-19330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456913#comment-16456913 ] Hive QA commented on HIVE-19330: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 20s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-10533/patches/PreCommit-HIVE-Build-10533.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10533/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > multi_insert_partitioned.q fails with "src table does not exist" message. > - > > Key: HIVE-19330 > URL: https://issues.apache.org/jira/browse/HIVE-19330 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19330.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19342) Update Wiki with new murmur hash UDF
[ https://issues.apache.org/jira/browse/HIVE-19342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-19342: - > Update Wiki with new murmur hash UDF > > > Key: HIVE-19342 > URL: https://issues.apache.org/jira/browse/HIVE-19342 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
[ https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated HIVE-19325: --- Status: Open (was: Patch Available) > Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1 > - > > Key: HIVE-19325 > URL: https://issues.apache.org/jira/browse/HIVE-19325 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 0.13.1 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Major > Fix For: 0.13.1 > > Attachments: HIVE-19325-branch-0.13.1.patch > > > This Jira is not meant to be contributed back, but I'm using it as a way to > run unit tests against a patch file. > Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool > Remove beeline -n flag used for impersonation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
[ https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated HIVE-19325: --- Attachment: (was: HIVE-19325.0.13.1.patch) > Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1 > - > > Key: HIVE-19325 > URL: https://issues.apache.org/jira/browse/HIVE-19325 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 0.13.1 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Major > Fix For: 0.13.1 > > Attachments: HIVE-19325-branch-0.13.1.patch > > > This Jira is not meant to be contributed back, but I'm using it as a way to > run unit tests against a patch file. > Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool > Remove beeline -n flag used for impersonation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
[ https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated HIVE-19325: --- Attachment: (was: HIVE-19325-0.13.1.patch) > Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1 > - > > Key: HIVE-19325 > URL: https://issues.apache.org/jira/browse/HIVE-19325 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 0.13.1 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Major > Fix For: 0.13.1 > > Attachments: HIVE-19325-branch-0.13.1.patch > > > This Jira is not meant to be contributed back, but I'm using it as a way to > run unit tests against a patch file. > Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool > Remove beeline -n flag used for impersonation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
[ https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated HIVE-19325: --- Attachment: (was: HIVE-19325.branch-0.13.1.patch) > Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1 > - > > Key: HIVE-19325 > URL: https://issues.apache.org/jira/browse/HIVE-19325 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 0.13.1 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Major > Fix For: 0.13.1 > > Attachments: HIVE-19325-branch-0.13.1.patch > > > This Jira is not meant to be contributed back, but I'm using it as a way to > run unit tests against a patch file. > Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool > Remove beeline -n flag used for impersonation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19325) Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1
[ https://issues.apache.org/jira/browse/HIVE-19325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alejandro Fernandez updated HIVE-19325: --- Status: Patch Available (was: Open) > Custom Hive Patch - Remove beeline -n flag in Hive 0.13.1 > - > > Key: HIVE-19325 > URL: https://issues.apache.org/jira/browse/HIVE-19325 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 0.13.1 >Reporter: Alejandro Fernandez >Assignee: Alejandro Fernandez >Priority: Major > Fix For: 0.13.1 > > Attachments: HIVE-19325-branch-0.13.1.patch > > > This Jira is not meant to be contributed back, but I'm using it as a way to > run unit tests against a patch file. > Specifically, TestBeelineWithArgs, ProxyAuthTest, and TestSchemaTool > Remove beeline -n flag used for impersonation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456934#comment-16456934 ] Sahil Takiar commented on HIVE-18958: - +1 LGTM > Fix Spark config warnings > - > > Key: HIVE-18958 > URL: https://issues.apache.org/jira/browse/HIVE-18958 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, > HIVE-18958.03.patch > > > Getting a few configuration warnings in the logs that we should fix: > {code} > 2018-03-14T10:06:19,164 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has > been deprecated as of Spark 2.3 and may be removed in the future. Please use > the new key 'spark.driver.memoryOverhead' instead. > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not > supported any more because Spark doesn't use Akka since 2.0 > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' > has been deprecated as of Spark 2.3 and may be removed in the future. Please > use the new key 'spark.executor.memoryOverhead' instead. 
> 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.server.connect.timeout=9 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.threads=8 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.connect.timeout=3 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.secret.bits=256 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.max.size=52428800 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
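The warnings quoted above suggest two mechanical fixes: rename the keys that Spark 2.3 deprecated, and keep `hive.*` properties out of the configuration handed to SparkConf (which only warns and ignores them). A sketch under those assumptions (illustrative only, not the actual HIVE-18958 patch; the key names come from the quoted log lines):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class SparkConfCleaner {

    // Spark 2.3 renames, taken from the deprecation warnings above.
    private static final Map<String, String> RENAMED = new HashMap<>();
    static {
        RENAMED.put("spark.yarn.driver.memoryOverhead", "spark.driver.memoryOverhead");
        RENAMED.put("spark.yarn.executor.memoryOverhead", "spark.executor.memoryOverhead");
    }

    // Returns a copy of `conf` with deprecated keys renamed and hive.* keys
    // removed, so SparkConf sees only keys it actually understands.
    public static Map<String, String> clean(Map<String, String> conf) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            String key = e.getKey();
            if (key.startsWith("hive.")) {
                continue; // "Ignoring non-spark config property" otherwise
            }
            out.put(RENAMED.getOrDefault(key, key), e.getValue());
        }
        return out;
    }
}
```

Running the Hive-side Spark properties through such a filter before building the SparkConf would silence both the deprecation warnings and the "Ignoring non-spark config property" messages.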
[jira] [Commented] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456937#comment-16456937 ] Bharathkrishna Guruvayoor Murali commented on HIVE-18958: - I will verify whether any of the test failures are related and update if required. > Fix Spark config warnings > - > > Key: HIVE-18958 > URL: https://issues.apache.org/jira/browse/HIVE-18958 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, > HIVE-18958.03.patch > > > Getting a few configuration warnings in the logs that we should fix: > {code} > 2018-03-14T10:06:19,164 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has > been deprecated as of Spark 2.3 and may be removed in the future. Please use > the new key 'spark.driver.memoryOverhead' instead. > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not > supported any more because Spark doesn't use Akka since 2.0 > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' > has been deprecated as of Spark 2.3 and may be removed in the future. Please > use the new key 'spark.executor.memoryOverhead' instead. 
> 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.server.connect.timeout=9 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.threads=8 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.connect.timeout=3 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.secret.bits=256 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.max.size=52428800 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19332) Disable compute.query.using.stats for external table
[ https://issues.apache.org/jira/browse/HIVE-19332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456965#comment-16456965 ] Jason Dere commented on HIVE-19332: --- [~gopalv] has pointed out that both this and HIVE-19333 can be accomplished by preventing external table stats from showing up as complete stats. > Disable compute.query.using.stats for external table > > > Key: HIVE-19332 > URL: https://issues.apache.org/jira/browse/HIVE-19332 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Priority: Major > > Hive can use statistics to answer queries like count(*). This can be > problematic on external tables where another tool might add files that Hive > doesn’t know about. In that case Hive will return incorrect results. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19332) Disable compute.query.using.stats for external table
[ https://issues.apache.org/jira/browse/HIVE-19332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19332: -- Attachment: HIVE-19332.1.patch > Disable compute.query.using.stats for external table > > > Key: HIVE-19332 > URL: https://issues.apache.org/jira/browse/HIVE-19332 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Priority: Major > Attachments: HIVE-19332.1.patch > > > Hive can use statistics to answer queries like count(*). This can be > problematic on external tables where another tool might add files that Hive > doesn’t know about. In that case Hive will return incorrect results. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19332) Disable compute.query.using.stats for external table
[ https://issues.apache.org/jira/browse/HIVE-19332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456981#comment-16456981 ] Jason Dere commented on HIVE-19332: --- Initial patch - external table stats will show up as not up-to-date. [~gopalv] [~jcamachorodriguez] does this approach look good? If so then I will try to add a qtest. > Disable compute.query.using.stats for external table > > > Key: HIVE-19332 > URL: https://issues.apache.org/jira/browse/HIVE-19332 > Project: Hive > Issue Type: Sub-task >Reporter: Jason Dere >Priority: Major > Attachments: HIVE-19332.1.patch > > > Hive can use statistics to answer queries like count(*). This can be > problematic on external tables where another tool might add files that Hive > doesn’t know about. In that case Hive will return incorrect results. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
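The approach described in the patch (external table stats never report as up-to-date, so the optimizer cannot answer count(*) from stats for them) can be sketched as follows. This is a simplified stand-in, not the HIVE-19332 code: the `COLUMN_STATS_ACCURATE` check is reduced to a plain boolean string for illustration, whereas Hive stores richer accuracy metadata there.

```java
import java.util.Map;

public class StatsCheck {

    // Mirrors Hive's managed vs. external distinction.
    enum TableType { MANAGED_TABLE, EXTERNAL_TABLE }

    // Basic stats count as up-to-date only for managed tables: an external
    // table's files may have been added behind Hive's back by another tool,
    // so answering count(*) from stats there could return stale results.
    public static boolean basicStatsUpToDate(TableType type, Map<String, String> params) {
        if (type == TableType.EXTERNAL_TABLE) {
            return false;
        }
        return "true".equalsIgnoreCase(params.get("COLUMN_STATS_ACCURATE"));
    }
}
```

Gating the up-to-date check on table type means no change is needed in the stats-using optimizations themselves; they simply never see complete stats for external tables.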
[jira] [Commented] (HIVE-19330) multi_insert_partitioned.q fails with "src table does not exist" message.
[ https://issues.apache.org/jira/browse/HIVE-19330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456982#comment-16456982 ] Hive QA commented on HIVE-19330: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920892/HIVE-19330.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 54 failed/errored test(s), 14284 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_stats] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_text_vec_part] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=167) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=183) 
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_reordering_values] (batchId=110) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=231) org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys (batchId=231) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testDifferentFiltersAreNotMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testSameFiltersMatched (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched0 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestOperatorCmp.testUnrelatedFiltersAreNotMatched1 (batchId=298) org.apache.hadoop.hive.ql.plan.mapping.TestReOptimization.testNotReExecutedIfAssertionError (batchId=298) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) 
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=240) org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime (batchId=242) org.apache.hive.jdbc.TestTriggersWo
[jira] [Commented] (HIVE-19337) Partition whitelist regex doesn't work (and never did)
[ https://issues.apache.org/jira/browse/HIVE-19337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456986#comment-16456986 ] Hive QA commented on HIVE-19337: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920899/HIVE-19337.01.branch-2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10534/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10534/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10534/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-04-27 20:12:04.291 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10534/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-04-27 20:12:04.295 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 331dd57..6f54709 master -> origin/master c20f7e1..7cbd648 branch-3 -> origin/branch-3 + git reset --hard HEAD HEAD is now at 331dd57 HIVE-18903: Lower Logging Level for ObjectStore (Antal Sinkovits, reviewed by Sahil Takiar) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 6f54709 HIVE-19330: multi_insert_partitioned.q fails with "src table does not exist" message. (Steve Yeom, reviewed by Jason Dere) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-04-27 20:12:08.027 + rm -rf ../yetus_PreCommit-HIVE-Build-10534 + mkdir ../yetus_PreCommit-HIVE-Build-10534 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10534 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10534/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index error: metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index error: src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12920899 - PreCommit-HIVE-Build > Partition whitelist regex doesn't work (and never did) > -- > > Key: HIVE-19337 > URL: https://issues.apache.org/jira/browse/HIVE-19337 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.3.3 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-19337.01.branch-2.patch > > > {{ObjectStore.setConf()}} has the following code: > {code:java} > String partitionValidationRegex = > > hiveConf.get(HiveConf.ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.name()); > {code} > Note that it uses the name() method, which returns the enum constant's name > (METASTORE_PARTITION_NAME_WHITELIST_PATTERN) rather than .varname. > As a result the regex will always be null. > The code was introduced as part of > HIVE-7223 (Support generic PartitionSpecs in Metastore partition-functions), > so it looks like this has been broken since the original code drop. This is fixed in > Hive 3, probably when [~alangates] reworked access to configuration > (HIVE-17733), so it isn't a bug in Hive 3. > [~stakiar_impala_496e] FYI. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
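The bug described above can be reproduced in isolation. The sketch below is a simplified stand-in, not the actual Hive code: `ConfVars` here models only the one constant from `HiveConf.ConfVars`, and a plain map stands in for the Hadoop `Configuration`; the key string mirrors the metastore property but should be treated as illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the HIVE-19337 bug: looking up a config value by the
// enum constant's name() instead of its varname field always misses.
public class ConfVarsBugDemo {
    enum ConfVars {
        METASTORE_PARTITION_NAME_WHITELIST_PATTERN(
                "hive.metastore.partition.name.whitelist.pattern");

        final String varname;  // the real configuration key
        ConfVars(String varname) { this.varname = varname; }
    }

    static Map<String, String> conf() {
        Map<String, String> conf = new HashMap<>();
        conf.put("hive.metastore.partition.name.whitelist.pattern", "[A-Za-z0-9_]+");
        return conf;
    }

    // Buggy lookup: name() yields the enum identifier
    // "METASTORE_PARTITION_NAME_WHITELIST_PATTERN", which is not a config
    // key, so the lookup returns null.
    public static String buggyLookup() {
        return conf().get(ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.name());
    }

    // Fixed lookup: varname holds the actual configuration key.
    public static String fixedLookup() {
        return conf().get(ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN.varname);
    }

    public static void main(String[] args) {
        System.out.println("buggy: " + buggyLookup());  // buggy: null
        System.out.println("fixed: " + fixedLookup());  // fixed: [A-Za-z0-9_]+
    }
}
```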
[jira] [Commented] (HIVE-19211) New streaming ingest API and support for dynamic partitioning
[ https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456988#comment-16456988 ] Hive QA commented on HIVE-19211: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12920905/HIVE-19211.8.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10535/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10535/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10535/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-04-27 20:15:00.052 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10535/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-04-27 20:15:00.055 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 6f54709 HIVE-19330: multi_insert_partitioned.q fails with "src table does not exist" message. (Steve Yeom, reviewed by Jason Dere) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 6f54709 HIVE-19330: multi_insert_partitioned.q fails with "src table does not exist" message. (Steve Yeom, reviewed by Jason Dere) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-04-27 20:15:00.647 + rm -rf ../yetus_PreCommit-HIVE-Build-10535 + mkdir ../yetus_PreCommit-HIVE-Build-10535 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10535 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10535/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index error: a/itests/hive-unit/pom.xml: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java: does not exist in index error: a/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreUtils.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcRecordUpdater.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DbTxnManager.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not exist in index error: a/streaming/pom.xml: does not exist in index error: 
a/streaming/src/java/org/apache/hive/streaming/AbstractRecordWriter.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/ConnectionError.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/DelimitedInputWriter.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/HeartBeatFailure.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/HiveEndPoint.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/ImpersonationFailed.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/InvalidColumn.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/InvalidPartition.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/InvalidTable.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/InvalidTrasactionState.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/PartitionCreationFailed.java: does not exist in index error: a/streaming/src/java/org/apache/hive/streaming/QueryFailedEx
[jira] [Commented] (HIVE-19206) Automatic memory management for open streaming writers
[ https://issues.apache.org/jira/browse/HIVE-19206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456992#comment-16456992 ] Prasanth Jayachandran commented on HIVE-19206: -- - Added config to disable auto flush (mainly for testing) - minor fixes > Automatic memory management for open streaming writers > -- > > Key: HIVE-19206 > URL: https://issues.apache.org/jira/browse/HIVE-19206 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19206.1.patch, HIVE-19206.2.patch > > > Problem: > When there are 100s of record updaters open, the amount of memory required > by ORC writers keeps growing because of ORC's internal buffers. This can lead > to potential high GC or OOM during streaming ingest. > Solution: > The high-level idea is for the streaming connection to remember all the open > record updaters and flush the record updaters periodically (at some interval). > Records written to each record updater can be used as a metric to determine > the candidate record updaters for flushing. > If the stripe size of the ORC file is 64MB, the default memory management check > happens only after every 5000 rows, which may be too late when there > are too many concurrent writers in a process. An example case would be 100 > open writers, each with an almost-full 64MB stripe of buffered data; > this would take 100*64MB ~= 6.4GB of memory. When all of the record writers > flush, the memory usage drops down to 100*~2MB, which is just ~200MB. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
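The flushing policy described in the issue can be sketched as follows. The class and method names here are illustrative, not the actual Hive streaming API: each `Writer` stands in for an open ORC record updater, buffered rows serve as the memory proxy mentioned above, and the check fires every `rowsPerCheck` rows rather than ORC's default 5000-row interval.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a connection-level flush policy: track all open writers and,
// every rowsPerCheck rows, flush the writer holding the most buffered rows.
public class FlushPolicySketch {
    public static class Writer {
        public long bufferedRows;           // proxy for buffered memory
        void write() { bufferedRows++; }
        void flush() { bufferedRows = 0; }  // buffers spilled to the file
    }

    public final List<Writer> openWriters = new ArrayList<>();
    private final long rowsPerCheck;  // check interval, counted in rows
    private long rowsSinceCheck;

    public FlushPolicySketch(long rowsPerCheck) {
        this.rowsPerCheck = rowsPerCheck;
    }

    public void write(Writer w) {
        w.write();
        if (++rowsSinceCheck >= rowsPerCheck) {
            rowsSinceCheck = 0;
            // Flush the biggest writer; a real implementation might instead
            // flush every writer above a memory threshold.
            Writer biggest = null;
            for (Writer o : openWriters) {
                if (biggest == null || o.bufferedRows > biggest.bufferedRows) {
                    biggest = o;
                }
            }
            if (biggest != null) {
                biggest.flush();
            }
        }
    }
}
```

With a connection-wide check like this, no single writer can sit on a full 64MB stripe for long, which is what caps the 100-writer worst case at roughly the post-flush ~200MB rather than ~6.4GB.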
[jira] [Updated] (HIVE-19206) Automatic memory management for open streaming writers
[ https://issues.apache.org/jira/browse/HIVE-19206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19206: - Attachment: HIVE-19206.2.patch > Automatic memory management for open streaming writers > -- > > Key: HIVE-19206 > URL: https://issues.apache.org/jira/browse/HIVE-19206 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19206.1.patch, HIVE-19206.2.patch > > > Problem: > When there are 100s of record updaters open, the amount of memory required > by ORC writers keeps growing because of ORC's internal buffers. This can lead > to potential high GC or OOM during streaming ingest. > Solution: > The high-level idea is for the streaming connection to remember all the open > record updaters and flush the record updaters periodically (at some interval). > Records written to each record updater can be used as a metric to determine > the candidate record updaters for flushing. > If the stripe size of the ORC file is 64MB, the default memory management check > happens only after every 5000 rows, which may be too late when there > are too many concurrent writers in a process. An example case would be 100 > open writers, each with an almost-full 64MB stripe of buffered data; > this would take 100*64MB ~= 6.4GB of memory. When all of the record writers > flush, the memory usage drops down to 100*~2MB, which is just ~200MB. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-18958: Attachment: HIVE-18958.testDiff.patch > Fix Spark config warnings > - > > Key: HIVE-18958 > URL: https://issues.apache.org/jira/browse/HIVE-18958 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, > HIVE-18958.03.patch, HIVE-18958.testDiff.patch > > > Getting a few configuration warnings in the logs that we should fix: > {code} > 2018-03-14T10:06:19,164 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has > been deprecated as of Spark 2.3 and may be removed in the future. Please use > the new key 'spark.driver.memoryOverhead' instead. > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not > supported any more because Spark doesn't use Akka since 2.0 > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' > has been deprecated as of Spark 2.3 and may be removed in the future. Please > use the new key 'spark.executor.memoryOverhead' instead. 
> 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.server.connect.timeout=9 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.threads=8 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.connect.timeout=3 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.secret.bits=256 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.max.size=52428800 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
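One way to silence the deprecation warnings quoted above is to rewrite the renamed keys before the configuration reaches `SparkConf`. The helper below is hypothetical (it is not part of Hive or Spark); only the old-to-new key names are taken from the warnings themselves, and the drop of `spark.akka.*` keys reflects the "Spark doesn't use Akka since 2.0" message.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper that rewrites deprecated Spark config keys and drops
// keys that no longer have any effect, based on the warnings in this issue.
public class SparkKeyTranslator {
    static final Map<String, String> RENAMED = new HashMap<>();
    static {
        // Old-to-new names taken from the Spark 2.3 deprecation warnings.
        RENAMED.put("spark.yarn.driver.memoryOverhead", "spark.driver.memoryOverhead");
        RENAMED.put("spark.yarn.executor.memoryOverhead", "spark.executor.memoryOverhead");
    }

    // Returns a copy of the configuration with renamed keys rewritten and
    // spark.akka.* keys dropped (Spark stopped using Akka in 2.0).
    public static Map<String, String> translate(Map<String, String> conf) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            String key = e.getKey();
            if (key.startsWith("spark.akka.")) {
                continue;
            }
            out.put(RENAMED.getOrDefault(key, key), e.getValue());
        }
        return out;
    }
}
```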
[jira] [Commented] (HIVE-18958) Fix Spark config warnings
[ https://issues.apache.org/jira/browse/HIVE-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457005#comment-16457005 ] Bharathkrishna Guruvayoor Murali commented on HIVE-18958: - [~stakiar] Attached the file HIVE-18958.testDiff.patch, which contains the test output differences. All the builds were successful, but I noticed differences in the q.out files. > Fix Spark config warnings > - > > Key: HIVE-18958 > URL: https://issues.apache.org/jira/browse/HIVE-18958 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-18958.01.patch, HIVE-18958.02.patch, > HIVE-18958.03.patch, HIVE-18958.testDiff.patch > > > Getting a few configuration warnings in the logs that we should fix: > {code} > 2018-03-14T10:06:19,164 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.driver.memoryOverhead' has > been deprecated as of Spark 2.3 and may be removed in the future. Please use > the new key 'spark.driver.memoryOverhead' instead. > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key spark.akka.logLifecycleEvents is not > supported any more because Spark doesn't use Akka since 2.0 > 2018-03-14T10:06:19,165 WARN [d5ade9e4-9354-40f1-8f74-631f373709b3 main] > spark.SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' > has been deprecated as of Spark 2.3 and may be removed in the future. Please > use the new key 'spark.executor.memoryOverhead' instead. 
> 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.server.connect.timeout=9 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.threads=8 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.connect.timeout=3 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.secret.bits=256 > 2018-03-14T10:06:20,351 INFO > [RemoteDriver-stderr-redir-d5ade9e4-9354-40f1-8f74-631f373709b3 main] > client.SparkClientImpl: Warning: Ignoring non-spark config property: > hive.spark.client.rpc.max.size=52428800 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)