[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Attachment: HIVE-22954.05.patch Status: Patch Available (was: In Progress) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, > HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Status: In Progress (was: Patch Available) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, > HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22126) hive-exec packaging should shade guava
[ https://issues.apache.org/jira/browse/HIVE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Chung updated HIVE-22126: Attachment: HIVE-22126.06.patch Status: Patch Available (was: Open) [^HIVE-22126.06.patch] # Because calcite-core itself depends on guava, its guava usages must be shaded, too. So I decided to include its family modules, calcite-druid, calcite-linq4j and org.apache.calcite.avatica:avatica. # After calcite-core is included in hive-exec.jar, some modules required by calcite-core (json-path, commons-compiler, janino, and snakeyaml) are no longer resolved automatically by maven. ## At first, I tried to include them in hive-exec.jar, but loading hive-exec.jar failed because of a jar signing problem (caused by codehaus commons-compiler). ## So I decided to list them in the dependency section of the pom.xml files of the modules that run unit tests. # Some test modules, like itests/hive-blobstore, have the original calcite-core on their test classpath with higher priority than hive-exec. ## I tried to remove the original calcite-core dependency but failed. ## Moving the hive-exec dependency to the top of the list is my solution. > hive-exec packaging should shade guava > -- > > Key: HIVE-22126 > URL: https://issues.apache.org/jira/browse/HIVE-22126 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Eugene Chung >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22126.01.patch, HIVE-22126.02.patch, > HIVE-22126.03.patch, HIVE-22126.04.patch, HIVE-22126.05.patch, > HIVE-22126.06.patch > > > The ql/pom.xml includes the complete guava library in hive-exec.jar > https://github.com/apache/hive/blob/master/ql/pom.xml#L990 This causes > problems for downstream clients of hive which have hive-exec.jar in their > classpath since they are pinned to the same guava version as that of hive. 
> We should shade guava classes so that other components which depend on > hive-exec can independently use a different version of guava as needed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
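The shading the issue asks for amounts to relocating guava's packages inside hive-exec.jar. A minimal sketch of what that looks like with the maven-shade-plugin (illustrative only; the real configuration in ql/pom.xml includes many more artifacts and relocations, and the shadedPattern prefix shown here is an assumption):

```xml
<!-- Sketch only: maven-shade-plugin relocation of guava so that the copy
     bundled in hive-exec.jar cannot clash with a downstream client's own
     guava version. The shadedPattern prefix is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- rewrites com.google.common.* bytecode references inside the jar -->
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hive.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With such a relocation in place, downstream users of hive-exec can put any guava version on their classpath, since hive's internal guava calls no longer resolve to the unshaded `com.google.common` packages.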
[jira] [Commented] (HIVE-22973) Handle 0 length batches in LlapArrowRowRecordReader
[ https://issues.apache.org/jira/browse/HIVE-22973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050962#comment-17050962 ] Shubham Chaurasia commented on HIVE-22973: -- [~maheshk114] [~jdere] Can you please review ? > Handle 0 length batches in LlapArrowRowRecordReader > --- > > Key: HIVE-22973 > URL: https://issues.apache.org/jira/browse/HIVE-22973 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22973.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > In https://issues.apache.org/jira/browse/HIVE-22856, we allowed > {{LlapArrowBatchRecordReader}} to permit 0 length arrow batches. > {{LlapArrowRowRecordReader}} which is a wrapper over > {{LlapArrowBatchRecordReader}} should also handle this. > On one of the systems (cannot be reproduced easily) where we were running > test {{TestJdbcWithMiniLlapVectorArrow}}, we saw following exception - > {code:java} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.173 s <<< > FAILURE! - in org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow > testLlapInputFormatEndToEnd(org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow) > Time elapsed: 6.476 s <<< ERROR! 
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:80) > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:540) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:504) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.testLlapInputFormatEndToEnd(BaseJdbcWithMiniLlap.java:236) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:77) > ... 13 more > {code} > cc [~maheshk114] [~jdere] -- This message was sent by Atlassian Jira (v8.3.4#803005)
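The failure mode above is an unconditional index into row 0 of a batch that may legitimately be empty; the fix the issue describes is to loop past zero-length batches before indexing. A self-contained sketch of that pattern (plain `int[]` arrays stand in for Arrow batches; the names here are illustrative, not Hive's actual reader API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ZeroLenBatchDemo {

    /** Reads every row across a sequence of batches, skipping zero-length
     *  batches instead of indexing into them (the reported bug was an
     *  out-of-bounds access on an empty batch). */
    static List<Integer> readAllRows(Iterator<int[]> batches) {
        List<Integer> out = new ArrayList<>();
        int[] current = new int[0];
        int pos = 0;
        while (true) {
            // advance past exhausted *and* empty batches before indexing
            while (pos >= current.length) {
                if (!batches.hasNext()) return out;
                current = batches.next();   // may legitimately have length 0
                pos = 0;
            }
            out.add(current[pos++]);
        }
    }

    public static void main(String[] args) {
        Iterator<int[]> batches = Arrays.asList(
                new int[]{1, 2}, new int[]{}, new int[]{3}).iterator();
        System.out.println(readAllRows(batches));  // the empty middle batch is skipped
    }
}
```

A row-level reader wrapping a batch-level reader needs exactly this inner loop: one `hasNext()`/`next()` step is not enough, because the next batch fetched may itself be empty.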
[jira] [Updated] (HIVE-22973) Handle 0 length batches in LlapArrowRowRecordReader
[ https://issues.apache.org/jira/browse/HIVE-22973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shubham Chaurasia updated HIVE-22973: - Description: In https://issues.apache.org/jira/browse/HIVE-22856, we allowed {{LlapArrowBatchRecordReader}} to permit 0 length arrow batches. {{LlapArrowRowRecordReader}} which is a wrapper over {{LlapArrowBatchRecordReader}} should also handle this. On one of the systems (cannot be reproduced easily) where we were running test {{TestJdbcWithMiniLlapVectorArrow}}, we saw following exception - {code:java} Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.173 s <<< FAILURE! - in org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow testLlapInputFormatEndToEnd(org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow) Time elapsed: 6.476 s <<< ERROR! java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0 at org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:80) at org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41) at org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:540) at org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:504) at org.apache.hive.jdbc.BaseJdbcWithMiniLlap.testLlapInputFormatEndToEnd(BaseJdbcWithMiniLlap.java:236) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 at org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:77) ... 13 more {code} cc [~maheshk114] [~jdere] was: In https://issues.apache.org/jira/browse/HIVE-22856, we allowed {{LlapArrowBatchRecordReader}} to permit 0 length arrow batches. {{LlapArrowRowRecordReader}} which is a wrapper over {{LlapArrowBatchRecordReader}} should also handle this. On one of the systems (cannot be reproduced easily) where we were running test {{TestJdbcWithMiniLlapVectorArrow}}, we saw following exception - {code:java} Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.173 s <<< FAILURE! - in org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow testLlapInputFormatEndToEnd(org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow) Time elapsed: 6.476 s <<< ERROR! 
java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0 at org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:80) at org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41) at org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:540) at org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:504) at org.apache.hive.jdbc.BaseJdbcWithMiniLlap.testLlapInputFormatEndToEnd(BaseJdbcWithMiniLlap.java:236) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 at org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:77) ... 13 more {code} > Handle 0 length batches in LlapArrowRowRecordReader > --- > > Key: HIVE-22973 > URL: https://issues.apache.org/jira/browse/HIVE-22973 > Project:
[jira] [Updated] (HIVE-22973) Handle 0 length batches in LlapArrowRowRecordReader
[ https://issues.apache.org/jira/browse/HIVE-22973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shubham Chaurasia updated HIVE-22973: - Attachment: HIVE-22973.01.patch Status: Patch Available (was: Open) > Handle 0 length batches in LlapArrowRowRecordReader > --- > > Key: HIVE-22973 > URL: https://issues.apache.org/jira/browse/HIVE-22973 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22973.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > In https://issues.apache.org/jira/browse/HIVE-22856, we allowed > {{LlapArrowBatchRecordReader}} to permit 0 length arrow batches. > {{LlapArrowRowRecordReader}} which is a wrapper over > {{LlapArrowBatchRecordReader}} should also handle this. > On one of the systems (cannot be reproduced easily) where we were running > test {{TestJdbcWithMiniLlapVectorArrow}}, we saw following exception - > {code:java} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.173 s <<< > FAILURE! - in org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow > testLlapInputFormatEndToEnd(org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow) > Time elapsed: 6.476 s <<< ERROR! 
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:80) > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:540) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:504) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.testLlapInputFormatEndToEnd(BaseJdbcWithMiniLlap.java:236) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:77) > ... 13 more > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22973) Handle 0 length batches in LlapArrowRowRecordReader
[ https://issues.apache.org/jira/browse/HIVE-22973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-22973: -- Labels: pull-request-available (was: ) > Handle 0 length batches in LlapArrowRowRecordReader > --- > > Key: HIVE-22973 > URL: https://issues.apache.org/jira/browse/HIVE-22973 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > Labels: pull-request-available > > In https://issues.apache.org/jira/browse/HIVE-22856, we allowed > {{LlapArrowBatchRecordReader}} to permit 0 length arrow batches. > {{LlapArrowRowRecordReader}} which is a wrapper over > {{LlapArrowBatchRecordReader}} should also handle this. > On one of the systems (cannot be reproduced easily) where we were running > test {{TestJdbcWithMiniLlapVectorArrow}}, we saw following exception - > {code:java} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.173 s <<< > FAILURE! - in org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow > testLlapInputFormatEndToEnd(org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow) > Time elapsed: 6.476 s <<< ERROR! 
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:80) > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:540) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:504) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.testLlapInputFormatEndToEnd(BaseJdbcWithMiniLlap.java:236) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:77) > ... 13 more > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-22973) Handle 0 length batches in LlapArrowRowRecordReader
[ https://issues.apache.org/jira/browse/HIVE-22973?focusedWorklogId=397321=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-397321 ] ASF GitHub Bot logged work on HIVE-22973: - Author: ASF GitHub Bot Created on: 04/Mar/20 07:38 Start Date: 04/Mar/20 07:38 Worklog Time Spent: 10m Work Description: ShubhamChaurasia commented on pull request #934: HIVE-22973: Handle 0 length batches in LlapArrowRowRecordReader URL: https://github.com/apache/hive/pull/934 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 397321) Remaining Estimate: 0h Time Spent: 10m > Handle 0 length batches in LlapArrowRowRecordReader > --- > > Key: HIVE-22973 > URL: https://issues.apache.org/jira/browse/HIVE-22973 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > In https://issues.apache.org/jira/browse/HIVE-22856, we allowed > {{LlapArrowBatchRecordReader}} to permit 0 length arrow batches. > {{LlapArrowRowRecordReader}} which is a wrapper over > {{LlapArrowBatchRecordReader}} should also handle this. > On one of the systems (cannot be reproduced easily) where we were running > test {{TestJdbcWithMiniLlapVectorArrow}}, we saw following exception - > {code:java} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.173 s <<< > FAILURE! - in org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow > testLlapInputFormatEndToEnd(org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow) > Time elapsed: 6.476 s <<< ERROR! 
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:80) > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:540) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:504) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.testLlapInputFormatEndToEnd(BaseJdbcWithMiniLlap.java:236) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:77) > ... 13 more > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22973) Handle 0 length batches in LlapArrowRowRecordReader
[ https://issues.apache.org/jira/browse/HIVE-22973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shubham Chaurasia reassigned HIVE-22973: > Handle 0 length batches in LlapArrowRowRecordReader > --- > > Key: HIVE-22973 > URL: https://issues.apache.org/jira/browse/HIVE-22973 > Project: Hive > Issue Type: Bug > Components: llap >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > > In https://issues.apache.org/jira/browse/HIVE-22856, we allowed > {{LlapArrowBatchRecordReader}} to permit 0 length arrow batches. > {{LlapArrowRowRecordReader}} which is a wrapper over > {{LlapArrowBatchRecordReader}} should also handle this. > On one of the systems (cannot be reproduced easily) where we were running > test {{TestJdbcWithMiniLlapVectorArrow}}, we saw following exception - > {code:java} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.173 s <<< > FAILURE! - in org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow > testLlapInputFormatEndToEnd(org.apache.hive.jdbc.TestJdbcWithMiniLlapVectorArrow) > Time elapsed: 6.476 s <<< ERROR! 
> java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:80) > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:41) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:540) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.processQuery(BaseJdbcWithMiniLlap.java:504) > at > org.apache.hive.jdbc.BaseJdbcWithMiniLlap.testLlapInputFormatEndToEnd(BaseJdbcWithMiniLlap.java:236) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.llap.LlapArrowRowRecordReader.next(LlapArrowRowRecordReader.java:77) > ... 13 more > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22126) hive-exec packaging should shade guava
[ https://issues.apache.org/jira/browse/HIVE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Chung updated HIVE-22126: Status: Open (was: Patch Available) > hive-exec packaging should shade guava > -- > > Key: HIVE-22126 > URL: https://issues.apache.org/jira/browse/HIVE-22126 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Eugene Chung >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22126.01.patch, HIVE-22126.02.patch, > HIVE-22126.03.patch, HIVE-22126.04.patch, HIVE-22126.05.patch > > > The ql/pom.xml includes the complete guava library in hive-exec.jar > https://github.com/apache/hive/blob/master/ql/pom.xml#L990 This causes > problems for downstream clients of hive which have hive-exec.jar in their > classpath since they are pinned to the same guava version as that of hive. > We should shade guava classes so that other components which depend on > hive-exec can independently use a different version of guava as needed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050955#comment-17050955 ] Hive QA commented on HIVE-22954: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 56s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 50s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20942/dev-support/hive-personality.sh | | git revision | master / 2b8a9b6 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20942/yetus/patch-asflicense-problems.txt | | modules | C: parser ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20942/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22947) The method getTableObjectsByName() in HiveMetaStoreClient.java is slow
[ https://issues.apache.org/jira/browse/HIVE-22947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Garg reassigned HIVE-22947: -- Assignee: Thejas Nair > The method getTableObjectsByName() in HiveMetaStoreClient.java is slow > -- > > Key: HIVE-22947 > URL: https://issues.apache.org/jira/browse/HIVE-22947 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Fang-Yu Rao >Assignee: Thejas Nair >Priority: Critical > Attachments: Benchmark_related_to_IMPALA-9363.pdf > > > The RPC of {{getTableObjectsByName()}} in {{HiveMetaStoreClient.java}} > ([https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L2111-L2114]) > is very slow. Specifically, according to an empirical evaluation, to load > the complete metadata of all the tables under a database consisting of 40,000 > tables, it takes at least 170 seconds for {{getTableObjectsByName()}} to > complete, whereas it only takes less than 0.5 second for {{getAllTables()}} > ([https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L2281-L2288]) > on the same machine. > In some use cases, not all the fields under the class of > {{org.apache.hadoop.hive.metastore.api.Table}} are required. For instance, if > a client would only like to determine the type of a table, e.g., an HDFS > table or a Kudu table, then it should suffice to only load the field of > {{sd}}, which is of class > {{org.apache.hadoop.hive.metastore.api.StorageDescriptor}}. It would be great > if {{getTableObjectsByName()}} could be made more fine-grained so that only > those required fields specified by the client are retrieved, which could also > possibly reduce the time spent on this RPC. 
> A spreadsheet is also attached ([^Benchmark_related_to_IMPALA-9363.pdf]), > where the detailed experimental results are provided. In the experiment, as a > client of Hive metastore, the {{catalogd}} of Impala calls > {{getTableObjectsByName()}} to retrieve the complete metadata of tables under > a database having 40,000 tables. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
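The improvement proposed above is field projection: the client names the fields it needs (e.g. only {{sd}}), and the server copies only those into the response instead of the full Table object. A minimal sketch of the idea (the names and the map-based table representation are illustrative only, not the Hive Metastore API):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Illustrative sketch of fine-grained field retrieval: given the client's
 *  list of wanted field names, return only those fields of the table,
 *  shrinking the RPC payload. Names here are hypothetical, not HMS APIs. */
public class ProjectionSketch {

    static Map<String, Object> project(Map<String, Object> fullTable,
                                       List<String> wantedFields) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String f : wantedFields) {
            if (fullTable.containsKey(f)) {
                out.put(f, fullTable.get(f));  // copy only requested fields
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> table = new HashMap<>();
        table.put("tableName", "t1");
        table.put("sd", "hdfs://warehouse/t1");       // storage descriptor stand-in
        table.put("parameters", "...bulky metadata...");
        // a client that only needs to classify the table asks for "sd" alone
        System.out.println(project(table, List.of("sd")));
    }
}
```

Per-table response size, not the number of tables, is what dominates the 170-second case, so trimming unrequested fields attacks the cost directly.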
[jira] [Commented] (HIVE-22972) Allow table id to be set for table creation requests
[ https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17050939#comment-17050939 ] Miklos Gergely commented on HIVE-22972: --- [~jcamachorodriguez] please review. > Allow table id to be set for table creation requests > > > Key: HIVE-22972 > URL: https://issues.apache.org/jira/browse/HIVE-22972 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22972.01.patch > > > Hive Metastore should accept requests for table creation where the id is set, > ignoring it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Status: In Progress (was: Patch Available) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Attachment: HIVE-22954.04.patch Status: Patch Available (was: In Progress) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050935#comment-17050935 ] Hive QA commented on HIVE-21218: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995524/HIVE-21218.4.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 18096 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.client.TestRuntimeStats.testCleanup[Remote] (batchId=230) org.apache.hadoop.hive.metastore.client.TestRuntimeStats.testReading[Remote] (batchId=230) org.apache.hadoop.hive.metastore.client.TestRuntimeStats.testRuntimeStatHandling[Remote] (batchId=230) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20941/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20941/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20941/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12995524 - PreCommit-HIVE-Build > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug > Components: kafka integration, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Milan Baran >Assignee: David McGinnis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, > HIVE-21218.4.patch, HIVE-21218.patch > > Time Spent: 5h > Remaining Estimate: 0h > > According to [Google > groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] > the Confluent Avro serializer uses a proprietary format for the Kafka value - > <magic byte><4 bytes of schema ID><Avro bytes conforming to schema>. > This format does not cause any problem for the Confluent Kafka deserializer, which > respects the format; however, for the Hive Kafka handler it is a bit of a problem to > correctly deserialize the Kafka value, because Hive uses a custom deserializer from > bytes to objects and ignores the Kafka consumer ser/deser classes provided via > table property. > It would be nice to support the Confluent format with the magic byte. > Also it would be great to support Schema Registry as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
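The Confluent wire format described above is straightforward to split apart: one magic byte (0x0), a 4-byte big-endian schema ID, then the Avro-encoded payload. A minimal sketch of just the framing (the Schema Registry lookup and actual Avro decoding are out of scope here):

```java
import java.nio.ByteBuffer;

// Splits a Kafka value written by the Confluent Avro serializer into its
// schema ID and raw Avro payload. A registry-aware deserializer would
// resolve the ID against the Schema Registry before decoding the payload.
class ConfluentFraming {
    static final byte MAGIC_BYTE = 0x0;
    static final int HEADER_LEN = 5; // 1 magic byte + 4-byte schema ID

    static int schemaId(byte[] kafkaValue) {
        if (kafkaValue.length < HEADER_LEN || kafkaValue[0] != MAGIC_BYTE) {
            throw new IllegalArgumentException("Not Confluent wire format");
        }
        // Schema ID is a big-endian int in bytes 1..4.
        return ByteBuffer.wrap(kafkaValue, 1, 4).getInt();
    }

    static byte[] avroPayload(byte[] kafkaValue) {
        byte[] payload = new byte[kafkaValue.length - HEADER_LEN];
        System.arraycopy(kafkaValue, HEADER_LEN, payload, 0, payload.length);
        return payload;
    }
}
```

This framing is exactly why a plain Avro deserializer fails on such topics: the five header bytes are not valid Avro, so they must be stripped (and the schema resolved) first.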
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Attachment: HIVE-22954.03.patch Status: Patch Available (was: In Progress) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.03.patch, HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Status: In Progress (was: Patch Available) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22972) Allow table id to be set for table creation requests
[ https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-22972: -- Status: Patch Available (was: Open) > Allow table id to be set for table creation requests > > > Key: HIVE-22972 > URL: https://issues.apache.org/jira/browse/HIVE-22972 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22972.01.patch > > > Hive Metastore should accept requests for table creation where the id is set, > ignoring it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22972) Allow table id to be set for table creation requests
[ https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-22972: -- Attachment: HIVE-22972.01.patch > Allow table id to be set for table creation requests > > > Key: HIVE-22972 > URL: https://issues.apache.org/jira/browse/HIVE-22972 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22972.01.patch > > > Hive Metastore should accept requests for table creation where the id is set, > ignoring it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22972) Allow table id to be set for table creation requests
[ https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely reassigned HIVE-22972: - > Allow table id to be set for table creation requests > > > Key: HIVE-22972 > URL: https://issues.apache.org/jira/browse/HIVE-22972 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > > Hive Metastore should accept requests for table creation where the id is set, > ignoring it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
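The intended behavior for HIVE-22972 — accept a table-creation request even when the client populated the id, and simply ignore that id — can be sketched as follows. Class and field names are illustrative, not the actual Metastore code.

```java
// Toy model of a create-table handler that tolerates a client-set table id
// by clearing it before persisting, instead of rejecting the request.
class CreateTableSketch {

    static class Table {
        long id = -1; // -1 means "unset"; the Metastore assigns real ids
        String name;
    }

    // The Metastore is the sole authority on ids, so any id supplied by
    // the client is discarded rather than treated as an error.
    static Table sanitizeForCreate(Table t) {
        t.id = -1;
        return t;
    }
}
```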
[jira] [Updated] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries
[ https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-22962: --- Attachment: HIVE-22962.03.patch > Reuse HiveRelFieldTrimmer instance across queries > - > > Key: HIVE-22962 > URL: https://issues.apache.org/jira/browse/HIVE-22962 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-22962.01.patch, HIVE-22962.02.patch, > HIVE-22962.03.patch, HIVE-22962.patch > > > Currently we create multiple {{HiveRelFieldTrimmer}} instances per query. > {{HiveRelFieldTrimmer}} uses a method dispatcher that has a built-in caching > mechanism: given a certain object, it stores the method that was called for > the object class. However, by instantiating the trimmer multiple times per > query and across queries, we create a new dispatcher with each instantiation, > thus effectively removing the caching mechanism that is built within the > dispatcher. > This issue is to reutilize the same {{HiveRelFieldTrimmer}} instance within a > single query and across queries. -- This message was sent by Atlassian Jira (v8.3.4#803005)
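The caching problem described in HIVE-22962 can be modeled with a toy dispatcher: the expensive per-class lookup is memoized inside the dispatcher, so a shared instance pays it once per class, while re-instantiating per query throws the cache away each time. This is a simplified illustration, not the actual HiveRelFieldTrimmer code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Models a method dispatcher with a built-in per-class cache. "lookups"
// counts how often the (nominally expensive) resolution actually runs.
class DispatcherCacheSketch {
    private final Map<Class<?>, String> methodCache = new ConcurrentHashMap<>();
    final AtomicInteger lookups = new AtomicInteger();

    String dispatch(Object node) {
        return methodCache.computeIfAbsent(node.getClass(), c -> {
            lookups.incrementAndGet(); // expensive reflective lookup stands in here
            return "trim_" + c.getSimpleName();
        });
    }
}
```

With one shared dispatcher, dispatching many nodes of the same class triggers a single lookup; creating a fresh dispatcher per query repeats the lookup every time, which is the overhead the patch removes by reusing one trimmer instance.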
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Attachment: HIVE-22954.02.patch Status: Patch Available (was: In Progress) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Status: In Progress (was: Patch Available) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, > HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050902#comment-17050902 ] Hive QA commented on HIVE-21218: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} kafka-handler in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 17s{color} | {color:red} kafka-handler in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 17s{color} | {color:red} kafka-handler in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} kafka-handler: The patch generated 94 new + 1 unchanged - 0 fixed = 95 total (was 1) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s{color} | {color:red} kafka-handler in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 3 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20941/dev-support/hive-personality.sh | | git revision | master / 2b8a9b6 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus/patch-mvninstall-kafka-handler.txt | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus/patch-compile-kafka-handler.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus/patch-compile-kafka-handler.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus/diff-checkstyle-kafka-handler.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus/whitespace-eol.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus/patch-findbugs-kafka-handler.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus/patch-asflicense-problems.txt | | modules | C: serde kafka-handler U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20941/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug >
[jira] [Commented] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries
[ https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050895#comment-17050895 ] Hive QA commented on HIVE-22962: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995509/HIVE-22962.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 59 failed/errored test(s), 18096 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ambiguitycheck] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cast1] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_2] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_aes_decrypt] (batchId=62) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_aes_encrypt] (batchId=100) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_crc32] (batchId=2) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_decode] (batchId=99) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_md5] (batchId=10) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_sha1] (batchId=8) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_sha2] (batchId=13) org.apache.hadoop.hive.cli.TestKuduCliDriver.testCliDriver[kudu_queries] (batchId=297) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_2] (batchId=175) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testAlterPartition (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testAlterTable (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testAlterTableCascade (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testAlterViewParititon (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testColumnStatistics (batchId=307) 
org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testComplexTable (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testComplexTypeApi (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testConcurrentMetastores (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testCreateAndGetTableWithDriver (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testCreateTableSettingId (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDBLocationChange (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDBOwner (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDBOwnerChange (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDatabase (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDatabaseLocation (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDatabaseLocationWithPermissionProblems (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDropDatabaseCascadeMVMultiDB (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testDropTable (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testFilterLastPartition (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testFilterSinglePartition (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testFunctionWithResources (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testGetConfigValue (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testGetMetastoreUuid (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testGetPartitionsWithSpec (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testGetSchemaWithNoClassDefFoundError (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testGetTableObjects (batchId=307) 
org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testGetUUIDInParallel (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testJDOPersistanceManagerCleanup (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testListPartitionNames (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testListPartitions (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testListPartitionsWihtLimitEnabled (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testNameMethods (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testPartition (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testPartitionFilter (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testPartitionFilterLike (batchId=307) org.apache.hive.minikdc.TestRemoteHiveMetaStoreKerberos.testRenamePartition (batchId=307)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Attachment: HIVE-22954.01.patch Status: Patch Available (was: In Progress) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Status: In Progress (was: Patch Available) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.01.patch, HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries
[ https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050889#comment-17050889 ] Hive QA commented on HIVE-22962: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 59s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 13m 27s{color} | {color:red} branch/itests/hive-jmh cannot run convertXmlToText from findbugs {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 58 new + 129 unchanged - 0 fixed = 187 total (was 129) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 9s{color} | {color:red} patch/itests/hive-jmh cannot run convertXmlToText from findbugs {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 44m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20940/dev-support/hive-personality.sh | | git revision | master / 2b8a9b6 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-20940/yetus/branch-findbugs-itests_hive-jmh.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20940/yetus/diff-checkstyle-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-20940/yetus/patch-findbugs-itests_hive-jmh.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20940/yetus/patch-asflicense-problems.txt | | modules | C: ql itests/hive-jmh U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20940/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Reuse HiveRelFieldTrimmer instance across queries > - > > Key: HIVE-22962 > URL: https://issues.apache.org/jira/browse/HIVE-22962 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-22962.01.patch, HIVE-22962.02.patch, > HIVE-22962.patch > > > Currently we create multiple {{HiveRelFieldTrimmer}} instances per query. > {{HiveRelFieldTrimmer}} uses a method dispatcher that has a built-in caching > mechanism: given a certain object, it stores the
[jira] [Commented] (HIVE-21851) FireEventResponse should include event id when available
[ https://issues.apache.org/jira/browse/HIVE-21851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050746#comment-17050746 ] Hive QA commented on HIVE-21851: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995501/HIVE-21851.05.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18096 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[topnkey_grouping_sets] (batchId=1) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20939/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20939/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20939/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12995501 - PreCommit-HIVE-Build > FireEventResponse should include event id when available > > > Key: HIVE-21851 > URL: https://issues.apache.org/jira/browse/HIVE-21851 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Minor > Attachments: HIVE-21851.01.patch, HIVE-21851.02.patch, > HIVE-21851.03.patch, HIVE-21851.04.patch, HIVE-21851.05.patch > > > The metastore API {{fire_listener_event}} gives clients the ability to fire an > INSERT event on DML operations. However, the returned response is an empty > struct. It would be useful to send back the event id information in the > response so that clients can take actions based on the event id. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
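The enriched response proposed in HIVE-21851 might look roughly like the sketch below. The class and method names are hypothetical (the real change would live in the Thrift-generated {{FireEventResponse}}); the point is only that the response carries event ids instead of being an empty struct.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical shape of a response that reports the ids of the events that
// were fired, so a client can correlate, poll, or de-duplicate against them.
class FireEventResponseSketch {
    private final List<Long> eventIds = new ArrayList<>();

    void addEventId(long id) {
        eventIds.add(id);
    }

    // Read-only view, mirroring how generated Thrift accessors behave.
    List<Long> getEventIds() {
        return Collections.unmodifiableList(eventIds);
    }
}
```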
[jira] [Commented] (HIVE-22963) HiveParser misinterpretes quotes in parameters of built-in functions or UDFs
[ https://issues.apache.org/jira/browse/HIVE-22963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050730#comment-17050730 ] Ganesha Shreedhara commented on HIVE-22963: --- [~pxiong] Can you please help with understanding if this is an expected behaviour? Does it require escaping a double quote if it is enclosed between single quotes in the parameter of a function? Also, SelectClauseParser is able to parse the SelectExpression here; the exception is actually thrown by FromClauseParser, even though the escaping is required in the SelectExpression. I suspect that this behaviour is because of the changes done as part of https://issues.apache.org/jira/browse/HIVE-12764. Please check and advise. > HiveParser misinterpretes quotes in parameters of built-in functions or UDFs > > > Key: HIVE-22963 > URL: https://issues.apache.org/jira/browse/HIVE-22963 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 3.1.1, 2.3.6 >Reporter: Ganesha Shreedhara >Priority: Major > > Parsing of a query fails when we use single or double quotes in the from/to string > of the translate function in the 2.3*/3.1.1 versions of Hive. Parsing of the same > query is successful in the 2.1.1 version of Hive. 
> *Steps to reproduce:* > > {code:java} > CREATE TABLE test_table (data string); > INSERT INTO test_table VALUES("d\"a\"t\"a"); > select translate(data, '"', '') from test_table; > {code} > > > Parsing fails with the following exception: > {code:java} > NoViableAltException(355@[157:5: ( ( Identifier LPAREN )=> > partitionedTableFunction | tableSource | subQuerySource | virtualTableSource > )])NoViableAltException(355@[157:5: ( ( Identifier LPAREN )=> > partitionedTableFunction | tableSource | subQuerySource | virtualTableSource > )]) at org.antlr.runtime.DFA.noViableAlt(DFA.java:158) at > org.antlr.runtime.DFA.predict(DFA.java:116) at > org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromSource0(HiveParser_FromClauseParser.java:2942) > at > org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromSource(HiveParser_FromClauseParser.java:2880) > at > org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.joinSource(HiveParser_FromClauseParser.java:1451) > at > org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromClause(HiveParser_FromClauseParser.java:1341) > at > org.apache.hadoop.hive.ql.parse.HiveParser.fromClause(HiveParser.java:45811) > at > org.apache.hadoop.hive.ql.parse.HiveParser.atomSelectStatement(HiveParser.java:39699) > at > org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:39951) > at > org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:39597) > at > org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:38786) > at > org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:38674) > at > org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:2340) > at > org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1369) at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:208) at > org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:77) at > 
org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:70) at > org.apache.hadoop.hive.ql.Driver.compile(Driver.java:507) at > org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1388) at > org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1528) at > org.apache.hadoop.hive.ql.Driver.run(Driver.java:1308) at > org.apache.hadoop.hive.ql.Driver.run(Driver.java:1298) at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:276) at > org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221) at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:465) at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:992) at > org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:916) at > org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:795) at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.apache.hadoop.util.RunJar.run(RunJar.java:223) at > org.apache.hadoop.util.RunJar.main(RunJar.java:136)FAILED: ParseException > line 1:40 cannot recognize input near 'tt' ';' '' in from source > 0org.apache.hadoop.hive.ql.parse.ParseException: line 1:40 cannot recognize > input near 'tt' ';' '' in from source 0 at >
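For reference, the result the failing query above is expected to produce can be modeled outside Hive. The sketch below is an illustration in Python of standard translate() semantics (characters in the "from" string with no counterpart in the "to" string are deleted); `hive_translate` is a stand-in helper, not Hive code.

```python
# Model of translate(data, '"', ''): every '"' maps to nothing, so all
# double quotes are deleted from the input string.
def hive_translate(data, from_chars, to_chars):
    # Characters beyond len(to_chars) have no replacement and are removed.
    table = {}
    for i, ch in enumerate(from_chars):
        table[ord(ch)] = to_chars[i] if i < len(to_chars) else None
    return data.translate(table)

print(hive_translate('d"a"t"a', '"', ''))  # data
```

So for the row inserted in the reproduction steps, the query should return `data` once the parser accepts the quoted argument.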
[jira] [Commented] (HIVE-21851) FireEventResponse should include event id when available
[ https://issues.apache.org/jira/browse/HIVE-21851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050720#comment-17050720 ] Hive QA commented on HIVE-21851: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 35s{color} | {color:blue} standalone-metastore/metastore-common in master has 35 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 15s{color} | {color:blue} standalone-metastore/metastore-server in master has 185 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 29s{color} | {color:red} standalone-metastore/metastore-server generated 1 new + 185 unchanged - 0 fixed = 186 total (was 185) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 14s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:standalone-metastore/metastore-server | | | Boxing/unboxing to parse a primitive org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(FireEventRequest) At HiveMetaStore.java:org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.fire_listener_event(FireEventRequest) At HiveMetaStore.java:[line 8623] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20939/dev-support/hive-personality.sh | | git revision | master / 2b8a9b6 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-20939/yetus/new-findbugs-standalone-metastore_metastore-server.html | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20939/yetus/patch-asflicense-problems.txt | | modules | C: standalone-metastore/metastore-common standalone-metastore/metastore-server itests/hcatalog-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20939/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > FireEventResponse should include event id when available > > > Key: HIVE-21851 > URL: https://issues.apache.org/jira/browse/HIVE-21851 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Minor >
[jira] [Updated] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David McGinnis updated HIVE-21218: -- Attachment: HIVE-21218.4.patch > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug > Components: kafka integration, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Milan Baran >Assignee: David McGinnis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, > HIVE-21218.4.patch, HIVE-21218.patch > > Time Spent: 5h > Remaining Estimate: 0h > > According to [Google > groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] > the Confluent Avro serializer uses a proprietary format for the Kafka value: > <magic byte><4 bytes of schema ID><Avro data that conforms to schema>. > This format does not cause any problem for the Confluent Kafka deserializer, which > respects the format; however, for the Hive Kafka handler it is a bit of a problem to > correctly deserialize the Kafka value, because Hive uses a custom deserializer from > bytes to objects and ignores the Kafka consumer ser/deser classes provided via > table property. > It would be nice to support the Confluent format with the magic byte. > It would also be great to support the Schema Registry. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=397193=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-397193 ] ASF GitHub Bot logged work on HIVE-21218: - Author: ASF GitHub Bot Created on: 04/Mar/20 02:36 Start Date: 04/Mar/20 02:36 Worklog Time Spent: 10m Work Description: davidov541 commented on issue #933: HIVE-21218: Adding support for Confluent Kafka Avro message format URL: https://github.com/apache/hive/pull/933#issuecomment-594289581 @cricket007 @b-slim : Please take a look and let me know what you think. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 397193) Time Spent: 5h (was: 4h 50m) > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug > Components: kafka integration, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Milan Baran >Assignee: David McGinnis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, HIVE-21218.patch > > Time Spent: 5h > Remaining Estimate: 0h > > According to [Google > groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] > the Confluent Avro serializer uses a proprietary format for the Kafka value: > <magic byte><4 bytes of schema ID><Avro data that conforms to schema>. > This format does not cause any problem for the Confluent Kafka deserializer, which > respects the format; however, for the Hive Kafka handler it is a bit of a problem to > correctly deserialize the Kafka value, because Hive uses a custom deserializer from > bytes to objects and ignores the Kafka consumer ser/deser classes provided via > table property. 
> It would be nice to support the Confluent format with the magic byte. > It would also be great to support the Schema Registry. -- This message was sent by Atlassian Jira (v8.3.4#803005)
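The framing described in the issue, one magic byte (0x00), a 4-byte big-endian schema ID, then the Avro-encoded payload, can be split apart in a few lines. This is an illustrative Python sketch of that layout, not code from the patch:

```python
import struct

def split_confluent_record(value: bytes):
    """Split a Confluent-framed Kafka value into (schema_id, avro_payload).

    Layout assumed: 1 magic byte (0x00), 4-byte big-endian schema ID,
    then the Avro-encoded bytes.
    """
    if len(value) < 5 or value[0] != 0:
        raise ValueError("not a Confluent-framed message")
    (schema_id,) = struct.unpack(">i", value[1:5])
    return schema_id, value[5:]

# Example: schema ID 42 followed by an opaque payload.
sid, payload = split_confluent_record(b"\x00\x00\x00\x00\x2a" + b"avro-bytes")
print(sid, payload)  # 42 b'avro-bytes'
```

This also shows why a deserializer that ignores the framing fails: the first five bytes are header, not Avro data, so they must be skipped (or used to look up the schema) before decoding.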
[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=397192=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-397192 ] ASF GitHub Bot logged work on HIVE-21218: - Author: ASF GitHub Bot Created on: 04/Mar/20 02:35 Start Date: 04/Mar/20 02:35 Worklog Time Spent: 10m Work Description: davidov541 commented on pull request #933: HIVE-21218: Adding support for Confluent Kafka Avro message format URL: https://github.com/apache/hive/pull/933 Adds support for indicating a number of bytes at the beginning of each message to ignore. This is added in order to support Confluent's Avro message format for Kafka, which has five magic bytes at the beginning of each message. A pre-defined confluent format is given as well, which automatically skips the first five bytes. This is a resubmission of https://github.com/apache/hive/pull/526, which had been abandoned. Comments in that thread have been applied. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 397192) Time Spent: 4h 50m (was: 4h 40m) > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug > Components: kafka integration, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Milan Baran >Assignee: David McGinnis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, HIVE-21218.patch > > Time Spent: 4h 50m > Remaining Estimate: 0h > > According to [Google > groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] > the Confluent Avro serializer uses a proprietary format for the Kafka value: > <magic byte><4 bytes of schema ID><Avro data that conforms to schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, which > respects the format; however, for the Hive Kafka handler it is a bit of a problem to > correctly deserialize the Kafka value, because Hive uses a custom deserializer from > bytes to objects and ignores the Kafka consumer ser/deser classes provided via > table property. > It would be nice to support the Confluent format with the magic byte. > It would also be great to support the Schema Registry. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David McGinnis updated HIVE-21218: -- Attachment: HIVE-21218.3.patch > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug > Components: kafka integration, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Milan Baran >Assignee: David McGinnis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, HIVE-21218.patch > > Time Spent: 4h 40m > Remaining Estimate: 0h > > According to [Google > groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] > the Confluent Avro serializer uses a proprietary format for the Kafka value: > <magic byte><4 bytes of schema ID><Avro data that conforms to schema>. > This format does not cause any problem for the Confluent Kafka deserializer, which > respects the format; however, for the Hive Kafka handler it is a bit of a problem to > correctly deserialize the Kafka value, because Hive uses a custom deserializer from > bytes to objects and ignores the Kafka consumer ser/deser classes provided via > table property. > It would be nice to support the Confluent format with the magic byte. > It would also be great to support the Schema Registry. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode
[ https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050694#comment-17050694 ] Hive QA commented on HIVE-22786: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995499/HIVE-22786.10.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 18096 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20938/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20938/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20938/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12995499 - PreCommit-HIVE-Build > Vectorization: Agg with distinct can be optimised in HASH mode > -- > > Key: HIVE-22786 > URL: https://issues.apache.org/jira/browse/HIVE-22786 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Rajesh Balamohan >Assignee: Ramesh Kumar Thangarajan >Priority: Minor > Attachments: HIVE-22786.1.patch, HIVE-22786.10.patch, > HIVE-22786.2.patch, HIVE-22786.3.patch, HIVE-22786.4.wip.patch, > HIVE-22786.5.patch, HIVE-22786.6.patch, HIVE-22786.7.patch, > HIVE-22786.8.patch, HIVE-22786.9.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22967) Support hive.reloadable.aux.jars.path for Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050692#comment-17050692 ] Toshihiko Uchida commented on HIVE-22967: - The first patch localizes reloadable jars just like HIVE-14037 and HIVE-14142. Let me fix the checkstyle warning. {code} ./ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java:1098:String allFiles = HiveStringUtils.joinIgnoringEmpty(new String[]{auxJars, reloadableAuxJars, addedJars, addedFiles}, ',');: warning: Line is longer than 120 characters (found 126). ./ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java:1119:String allFiles = HiveStringUtils.joinIgnoringEmpty(new String[]{auxJars, reloadableAuxJars, addedJars, addedFiles}, ',');: warning: Line is longer than 120 characters (found 126). {code} > Support hive.reloadable.aux.jars.path for Hive on Tez > - > > Key: HIVE-22967 > URL: https://issues.apache.org/jira/browse/HIVE-22967 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.2, 2.3.6 >Reporter: Toshihiko Uchida >Assignee: Toshihiko Uchida >Priority: Minor > Attachments: HIVE-22967.1.patch > > > The jars in hive.reloadable.aux.jars.path are not localized in Tez containers. > As a result, any query utilizing those reloadable jars fails for Hive on Tez > due to ClassNotFoundException. > {code} > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, > vertexName=Map 1, vertexId=vertex_1578856704640_0087_1_00, diagnostics=[Task > failed, taskId=task_1578856704640_0087_1_00_01, diagnostics=[TaskAttempt > 0 failed, info=[Error: Error while running task ( failure) : > attempt_1578856704640_0087_1_00_01_0:java.lang.RuntimeException: > java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) > at > com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: Map operator initialization failed > at > 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:354) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) > ... 16 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:71) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.initializeOp(VectorFilterOperator.java:83) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:573) > at > org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:525) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:386) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.initializeMapOperator(VectorMapOperator.java:591) > at >
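The DagUtils lines flagged by checkstyle above concatenate several comma-separated resource lists while skipping empty entries. A small Python sketch of that join-ignoring-empty behavior (the helper is a stand-in modeled on what the flagged call does, not Hive's `HiveStringUtils` implementation):

```python
def join_ignoring_empty(parts, sep=","):
    # Stand-in for HiveStringUtils.joinIgnoringEmpty: drop null/empty
    # entries before joining, so results like "a.jar,,b.jar," never appear.
    return sep.join(p for p in parts if p)

# Only auxJars and addedJars are set; the empty lists vanish from the result.
aux_jars, reloadable_aux_jars, added_jars, added_files = "a.jar", "", "b.jar", ""
all_files = join_ignoring_empty([aux_jars, reloadable_aux_jars, added_jars, added_files])
print(all_files)  # a.jar,b.jar
```

The patch's point is that `reloadableAuxJars` must appear in this joined list so those jars get localized into Tez containers alongside the others.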
[jira] [Commented] (HIVE-22966) LLAP: Consider including waitTime for comparing attempts in same vertex
[ https://issues.apache.org/jira/browse/HIVE-22966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050680#comment-17050680 ] Rajesh Balamohan commented on HIVE-22966: - It does, depending on the wait time. Wait time is used as a proxy to schedule the attempts. For example, without the patch, the longest wait time of an attempt was 1430 ms with a running time of 528 ms (1900+ ms in total) in Q55. With the patch, the longest wait time of an attempt was 741 ms with a running time of 700 ms (about 1500 ms in total). When an attempt gets scheduled affects the overall runtime of the vertex. The patch reduces the starvation period for a task by comparing attempts fairly on wait time. > LLAP: Consider including waitTime for comparing attempts in same vertex > --- > > Key: HIVE-22966 > URL: https://issues.apache.org/jira/browse/HIVE-22966 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-22966.3.patch, HIVE-22966.4.patch > > > When attempts are compared within the same vertex, the scheduler should pick the attempt > with the longest wait time to avoid starvation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
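The scheduling idea in the issue, preferring the attempt that has waited longest among attempts of the same vertex, can be sketched as a simple selection rule. The field names below are illustrative, not LLAP's actual data model:

```python
# Among attempts of the same vertex, pick the one with the longest wait
# time (now - enqueue_time), so no attempt starves behind newer arrivals.
def pick_next(attempts, now):
    return max(attempts, key=lambda a: now - a["enqueue_time"])

now = 100.0
attempts = [
    {"id": "a1", "enqueue_time": 99.0},  # waited 1.0 s
    {"id": "a2", "enqueue_time": 95.0},  # waited 5.0 s
]
print(pick_next(attempts, now)["id"])  # a2
```

This mirrors the numbers in the comment: bounding the maximum wait time trades a slightly longer run for a much shorter total (wait + run) on the worst-case attempt.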
[jira] [Commented] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode
[ https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050672#comment-17050672 ] Hive QA commented on HIVE-22786: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 52s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 1 new + 404 unchanged - 0 fixed = 405 total (was 404) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20938/dev-support/hive-personality.sh | | git revision | master / 2b8a9b6 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20938/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20938/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20938/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Vectorization: Agg with distinct can be optimised in HASH mode > -- > > Key: HIVE-22786 > URL: https://issues.apache.org/jira/browse/HIVE-22786 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Rajesh Balamohan >Assignee: Ramesh Kumar Thangarajan >Priority: Minor > Attachments: HIVE-22786.1.patch, HIVE-22786.10.patch, > HIVE-22786.2.patch, HIVE-22786.3.patch, HIVE-22786.4.wip.patch, > HIVE-22786.5.patch, HIVE-22786.6.patch, HIVE-22786.7.patch, > HIVE-22786.8.patch, HIVE-22786.9.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query
[ https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050664#comment-17050664 ] Hive QA commented on HIVE-21778: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995488/HIVE-21778.5.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18096 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[structin] (batchId=37) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20937/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20937/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20937/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12995488 - PreCommit-HIVE-Build > CBO: "Struct is not null" gets evaluated as `nullable` always causing filter > miss in the query > -- > > Key: HIVE-21778 > URL: https://issues.apache.org/jira/browse/HIVE-21778 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 4.0.0, 2.3.5 >Reporter: Rajesh Balamohan >Assignee: Vineet Garg >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, > HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, test_null.q, > test_null.q.out > > Time Spent: 40m > Remaining Estimate: 0h > > {noformat} > drop table if exists test_struct; > CREATE external TABLE test_struct > ( > f1 string, > demo_struct struct, > datestr string > ); > set hive.cbo.enable=true; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note > that demo_struct filter is not added here > Filter Operator > predicate: (datestr = '2019-01-01') (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > set hive.cbo.enable=false; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) <- Note that demo_struct filter is added when CBO is > turned off > Filter Operator > predicate: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > 
{noformat} > In CalcitePlanner::genFilterRelNode, the following code fails to evaluate > this filter. > {noformat} > RexNode factoredFilterExpr = RexUtil > .pullFactors(cluster.getRexBuilder(), convertedFilterExpr); > {noformat} > Note that even if we add `demo_struct.f1`, it would end up pushing the filter > correctly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22947) The method getTableObjectsByName() in HiveMetaStoreClient.java is slow
[ https://issues.apache.org/jira/browse/HIVE-22947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Garg updated HIVE-22947: --- Priority: Critical (was: Major) > The method getTableObjectsByName() in HiveMetaStoreClient.java is slow > -- > > Key: HIVE-22947 > URL: https://issues.apache.org/jira/browse/HIVE-22947 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Fang-Yu Rao >Priority: Critical > Attachments: Benchmark_related_to_IMPALA-9363.pdf > > > The RPC of {{getTableObjectsByName()}} in {{HiveMetaStoreClient.java}} > ([https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L2111-L2114]) > is very slow. Specifically, according to an empirical evaluation, to load > the complete metadata of all the tables under a database consisting of 40,000 > tables, it takes at least 170 seconds for {{getTableObjectsByName()}} to > complete, whereas it only takes less than 0.5 second for {{getAllTables()}} > ([https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L2281-L2288]) > on the same machine. > In some use cases, not all the fields under the class of > {{org.apache.hadoop.hive.metastore.api.Table}} are required. For instance, if > a client would only like to determine the type of a table, e.g., an HDFS > table or a Kudu table, then it should suffice to only load the field of > {{sd}}, which is of class > {{org.apache.hadoop.hive.metastore.api.StorageDescriptor}}. It would be great > if {{getTableObjectsByName()}} could be made more fine-grained so that only > those required fields specified by the client are retrieved, which could also > possibly reduce the time spent on this RPC. > A spreadsheet is also attached ([^Benchmark_related_to_IMPALA-9363.pdf]), > where the detailed experimental results are provided. 
In the experiment, as a > client of Hive metastore, the {{catalogd}} of Impala calls > {{getTableObjectsByName()}} to retrieve the complete metadata of tables under > a database having 40,000 tables. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
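The fine-grained retrieval proposed above, letting the client name which Table fields it needs, can be illustrated with a projection over full objects. This is a hypothetical sketch of the concept only; `project_tables` and the dict layout are illustrative and not part of the HiveMetaStoreClient API:

```python
# Hypothetical server-side projection: keep only the fields the client
# asked for (e.g. just "sd" to determine table type), dropping the rest
# before serialization to shrink the RPC payload.
def project_tables(tables, wanted_fields):
    return [{k: t[k] for k in wanted_fields if k in t} for t in tables]

full = [{"tableName": "t1", "sd": {"inputFormat": "kudu"}, "parameters": {"k": "v"}}]
print(project_tables(full, ["tableName", "sd"]))
```

With 40,000 tables, most of the 170-second cost presumably comes from materializing and shipping fields the client never reads; projecting early is the proposed remedy.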
[jira] [Commented] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query
[ https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050641#comment-17050641 ] Hive QA commented on HIVE-21778: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 53s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20937/dev-support/hive-personality.sh | | git revision | master / 2b8a9b6 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20937/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20937/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> CBO: "Struct is not null" gets evaluated as `nullable` always causing filter > miss in the query > -- > > Key: HIVE-21778 > URL: https://issues.apache.org/jira/browse/HIVE-21778 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 4.0.0, 2.3.5 >Reporter: Rajesh Balamohan >Assignee: Vineet Garg >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, > HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, test_null.q, > test_null.q.out > > Time Spent: 40m > Remaining Estimate: 0h > > {noformat} > drop table if exists test_struct; > CREATE external TABLE test_struct > ( > f1 string, > demo_struct struct, > datestr string > ); > set hive.cbo.enable=true; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note > that demo_struct filter is not added here > Filter Operator > predicate: (datestr = '2019-01-01') (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 >
[jira] [Updated] (HIVE-22929) Performance: quoted identifier parsing uses throwaway Regex via String.replaceAll()
[ https://issues.apache.org/jira/browse/HIVE-22929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-22929: --- Fix Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to master, thanks [~kkasa] > Performance: quoted identifier parsing uses throwaway Regex via > String.replaceAll() > --- > > Key: HIVE-22929 > URL: https://issues.apache.org/jira/browse/HIVE-22929 > Project: Hive > Issue Type: Bug >Reporter: Gopal Vijayaraghavan >Assignee: Krisztian Kasa >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22929.1.patch, HIVE-22929.2.patch, > HIVE-22929.2.patch, HIVE-22929.2.patch, HIVE-22929.2.patch, > String.replaceAll.png > > > !String.replaceAll.png! > https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g#L530 > {code} > '`' ( '``' | ~('`') )* '`' { setText(getText().substring(1, > getText().length() -1 ).replaceAll("``", "`")); } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22929) Performance: quoted identifier parsing uses throwaway Regex via String.replaceAll()
[ https://issues.apache.org/jira/browse/HIVE-22929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050614#comment-17050614 ] Jesus Camacho Rodriguez commented on HIVE-22929: +1 > Performance: quoted identifier parsing uses throwaway Regex via > String.replaceAll() > --- > > Key: HIVE-22929 > URL: https://issues.apache.org/jira/browse/HIVE-22929 > Project: Hive > Issue Type: Bug >Reporter: Gopal Vijayaraghavan >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-22929.1.patch, HIVE-22929.2.patch, > HIVE-22929.2.patch, HIVE-22929.2.patch, HIVE-22929.2.patch, > String.replaceAll.png > > > !String.replaceAll.png! > https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g#L530 > {code} > '`' ( '``' | ~('`') )* '`' { setText(getText().substring(1, > getText().length() -1 ).replaceAll("``", "`")); } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
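The lexer action quoted above calls {{String.replaceAll("``", "`")}} on every quoted identifier, and {{replaceAll}} compiles a throwaway {{java.util.regex.Pattern}} per call. A minimal regex-free sketch of the same unescaping (illustrative, not necessarily the committed patch):

```java
// Sketch of the fix direction in HIVE-22929: strip the surrounding backticks
// and collapse doubled backticks with a plain character scan, avoiding a
// fresh Pattern compilation for every identifier token. The unquote() helper
// is illustrative, not the actual HiveLexer.g change.
public class QuotedIdentifier {

    // Equivalent to: token.substring(1, len - 1).replaceAll("``", "`")
    static String unquote(String token) {
        String body = token.substring(1, token.length() - 1);
        StringBuilder out = new StringBuilder(body.length());
        for (int i = 0; i < body.length(); i++) {
            char c = body.charAt(i);
            out.append(c);
            if (c == '`' && i + 1 < body.length() && body.charAt(i + 1) == '`') {
                i++; // skip the second backtick of an escaped pair
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(unquote("`my``table`")); // my`table
    }
}
```

An alternative with the same effect is a single statically precompiled {{Pattern}}; either way, the per-token regex compilation visible in the attached profile disappears.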
[jira] [Comment Edited] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries
[ https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050612#comment-17050612 ] Jesus Camacho Rodriguez edited comment on HIVE-22962 at 3/3/20 10:37 PM: - Latest patch includes a very simple benchmark {{FieldTrimmerBench}} that applies the trimmer on a 10 operator plan repeatedly. {noformat} Benchmark Mode Cnt Score Error Units FieldTrimmerBench.baseRelFieldTrimmer thrpt 10 0.088 ± 0.002 ops/us FieldTrimmerBench.modBaseRelFieldTrimmer thrpt 10 8.292 ± 0.138 ops/us FieldTrimmerBench.hiveRelFieldTrimmer thrpt 10 10.182 ± 0.117 ops/us FieldTrimmerBench.baseRelFieldTrimmer avgt 10 10.548 ± 0.148 us/op FieldTrimmerBench.modBaseRelFieldTrimmer avgt 10 0.116 ± 0.002 us/op FieldTrimmerBench.hiveRelFieldTrimmer avgt 10 0.109 ± 0.001 us/op {noformat} - baseRelFieldTrimmer : Calcite implementation that instantiates trimmer on every call and thus does not exploit method resolution caching. It uses reflection. - modBaseRelFieldTrimmer : Modified implementation that can use the same trimmer instance and thus exploits method resolution caching. It uses reflection. - hiveRelFieldTrimmer : Modified implementation that can use the same trimmer instance and thus exploits method resolution caching. It uses lambda metafactory. was (Author: jcamachorodriguez): Latest patch includes a very simple benchmark {{FieldTrimmerBench}} that applies the trimmer on a 10 operator plan repeatedly. 
{noformat} Benchmark Mode Cnt Score Error Units FieldTrimmerBench.baseRelFieldTrimmer thrpt 10 0.088 ± 0.002 ops/us FieldTrimmerBench.hiveRelFieldTrimmer thrpt 10 10.182 ± 0.117 ops/us FieldTrimmerBench.modBaseRelFieldTrimmer thrpt 10 8.292 ± 0.138 ops/us FieldTrimmerBench.baseRelFieldTrimmer avgt 10 10.548 ± 0.148 us/op FieldTrimmerBench.hiveRelFieldTrimmer avgt 10 0.109 ± 0.001 us/op FieldTrimmerBench.modBaseRelFieldTrimmer avgt 10 0.116 ± 0.002 us/op {noformat} - baseRelFieldTrimmer : Calcite implementation that instantiates trimmer on every call and thus does not exploit method resolution caching. It uses reflection. - modBaseRelFieldTrimmer : Modified implementation that can use the same trimmer instance and thus exploits method resolution caching. It uses reflection. - hiveRelFieldTrimmer : Modified implementation that can use the same trimmer instance and thus exploits method resolution caching. It uses lambda metafactory. > Reuse HiveRelFieldTrimmer instance across queries > - > > Key: HIVE-22962 > URL: https://issues.apache.org/jira/browse/HIVE-22962 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-22962.01.patch, HIVE-22962.02.patch, > HIVE-22962.patch > > > Currently we create multiple {{HiveRelFieldTrimmer}} instances per query. > {{HiveRelFieldTrimmer}} uses a method dispatcher that has a built-in caching > mechanism: given a certain object, it stores the method that was called for > the object class. However, by instantiating the trimmer multiple times per > query and across queries, we create a new dispatcher with each instantiation, > thus effectively removing the caching mechanism that is built within the > dispatcher. > This issue is to reutilize the same {{HiveRelFieldTrimmer}} instance within a > single query and across queries. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries
[ https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050612#comment-17050612 ] Jesus Camacho Rodriguez commented on HIVE-22962: Latest patch includes a very simple benchmark {{FieldTrimmerBench}} that applies the trimmer on a 10 operator plan repeatedly. {noformat} Benchmark Mode Cnt Score Error Units FieldTrimmerBench.baseRelFieldTrimmer thrpt 10 0.088 ± 0.002 ops/us FieldTrimmerBench.hiveRelFieldTrimmer thrpt 10 10.182 ± 0.117 ops/us FieldTrimmerBench.modBaseRelFieldTrimmer thrpt 10 8.292 ± 0.138 ops/us FieldTrimmerBench.baseRelFieldTrimmer avgt 10 10.548 ± 0.148 us/op FieldTrimmerBench.hiveRelFieldTrimmer avgt 10 0.109 ± 0.001 us/op FieldTrimmerBench.modBaseRelFieldTrimmer avgt 10 0.116 ± 0.002 us/op {noformat} - baseRelFieldTrimmer : Calcite implementation that instantiates trimmer on every call and thus does not exploit method resolution caching. It uses reflection. - modBaseRelFieldTrimmer : Modified implementation that can use the same trimmer instance and thus exploits method resolution caching. It uses reflection. - hiveRelFieldTrimmer : Modified implementation that can use the same trimmer instance and thus exploits method resolution caching. It uses lambda metafactory. > Reuse HiveRelFieldTrimmer instance across queries > - > > Key: HIVE-22962 > URL: https://issues.apache.org/jira/browse/HIVE-22962 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-22962.01.patch, HIVE-22962.02.patch, > HIVE-22962.patch > > > Currently we create multiple {{HiveRelFieldTrimmer}} instances per query. > {{HiveRelFieldTrimmer}} uses a method dispatcher that has a built-in caching > mechanism: given a certain object, it stores the method that was called for > the object class. 
However, by instantiating the trimmer multiple times per > query and across queries, we create a new dispatcher with each instantiation, > thus effectively removing the caching mechanism that is built within the > dispatcher. > This issue is to reuse the same {{HiveRelFieldTrimmer}} instance within a > single query and across queries. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries
[ https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-22962: --- Attachment: HIVE-22962.02.patch > Reuse HiveRelFieldTrimmer instance across queries > - > > Key: HIVE-22962 > URL: https://issues.apache.org/jira/browse/HIVE-22962 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-22962.01.patch, HIVE-22962.02.patch, > HIVE-22962.patch > > > Currently we create multiple {{HiveRelFieldTrimmer}} instances per query. > {{HiveRelFieldTrimmer}} uses a method dispatcher that has a built-in caching > mechanism: given a certain object, it stores the method that was called for > the object class. However, by instantiating the trimmer multiple times per > query and across queries, we create a new dispatcher with each instantiation, > thus effectively removing the caching mechanism that is built within the > dispatcher. > This issue is to reutilize the same {{HiveRelFieldTrimmer}} instance within a > single query and across queries. -- This message was sent by Atlassian Jira (v8.3.4#803005)
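The dispatcher behavior described above can be sketched as a per-class method cache: the cache lives in the instance, so creating a fresh trimmer per query throws every resolved method away. All names below are illustrative, not Hive's or Calcite's actual dispatcher:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Sketch of the caching described in HIVE-22962: the dispatcher resolves a
// handler method from the argument's runtime class (an expensive reflective
// walk) and memoizes the result per class. Reusing one instance keeps the
// cache warm; re-instantiating per query repeats the reflective lookup.
public class CachingDispatcher {
    static int reflectiveLookups = 0;                // counts the expensive path
    private final Map<Class<?>, Method> cache = new HashMap<>();

    public Method resolve(Object arg) {
        return cache.computeIfAbsent(arg.getClass(), cls -> {
            reflectiveLookups++;                     // only on a cache miss
            try {
                return CachingDispatcher.class.getMethod("handle", cls);
            } catch (NoSuchMethodException e) {
                throw new IllegalStateException(e);
            }
        });
    }

    public void handle(String s) { }
    public void handle(Integer i) { }

    public static void main(String[] args) {
        CachingDispatcher reused = new CachingDispatcher();
        for (int i = 0; i < 1000; i++) {
            reused.resolve("query");                 // cache warm after the first call
        }
        int withReuse = reflectiveLookups;

        for (int i = 0; i < 1000; i++) {
            new CachingDispatcher().resolve("query"); // fresh cache every time
        }
        System.out.println(withReuse);                      // 1
        System.out.println(reflectiveLookups - withReuse);  // 1000
    }
}
```

The benchmark numbers in the comments above show the same shape: the variant that keeps one instance (and, in {{hiveRelFieldTrimmer}}, swaps reflection for lambda metafactory) is roughly two orders of magnitude faster per operation.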
[jira] [Commented] (HIVE-22126) hive-exec packaging should shade guava
[ https://issues.apache.org/jira/browse/HIVE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050609#comment-17050609 ] Hive QA commented on HIVE-22126: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} common in master has 63 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 48s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} ql: The patch generated 1 new + 44 unchanged - 1 fixed = 45 total (was 45) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 4s{color} | {color:red} ql generated 1 new + 1530 unchanged - 1 fixed = 1531 total (was 1531) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 29m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveAggregate.deriveRowType(RelDataTypeFactory, RelDataType, boolean, ImmutableBitSet, List) concatenates strings using + in a loop At HiveAggregate.java:concatenates strings using + in a loop At HiveAggregate.java:[line 104] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20936/dev-support/hive-personality.sh | | git revision | master / 9cdf97f | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20936/yetus/diff-checkstyle-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-20936/yetus/new-findbugs-ql.html | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20936/yetus/patch-asflicense-problems.txt | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20936/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > hive-exec packaging should shade guava > -- > > Key: HIVE-22126 > URL: https://issues.apache.org/jira/browse/HIVE-22126 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Eugene Chung >Priority: Major >
[jira] [Commented] (HIVE-22967) Support hive.reloadable.aux.jars.path for Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050591#comment-17050591 ] Hive QA commented on HIVE-22967: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995471/HIVE-22967.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 18096 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20935/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20935/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20935/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12995471 - PreCommit-HIVE-Build > Support hive.reloadable.aux.jars.path for Hive on Tez > - > > Key: HIVE-22967 > URL: https://issues.apache.org/jira/browse/HIVE-22967 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.2, 2.3.6 >Reporter: Toshihiko Uchida >Assignee: Toshihiko Uchida >Priority: Minor > Attachments: HIVE-22967.1.patch > > > The jars in hive.reloadable.aux.jars.path are not localized in Tez containers. > As a result, any query utilizing those reloadable jars fails for Hive on Tez > due to ClassNotFoundException. > {code} > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, > vertexName=Map 1, vertexId=vertex_1578856704640_0087_1_00, diagnostics=[Task > failed, taskId=task_1578856704640_0087_1_00_01, diagnostics=[TaskAttempt > 0 failed, info=[Error: Error while running task ( failure) : > attempt_1578856704640_0087_1_00_01_0:java.lang.RuntimeException: > java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) > at > com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: Map operator initialization failed > at > 
org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:354) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) > ... 16 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:71) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.initializeOp(VectorFilterOperator.java:83) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:573) > at > org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:525) > at >
[jira] [Commented] (HIVE-22967) Support hive.reloadable.aux.jars.path for Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050549#comment-17050549 ] Hive QA commented on HIVE-22967: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 54s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 2 new + 41 unchanged - 0 fixed = 43 total (was 41) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20935/dev-support/hive-personality.sh | | git revision | master / 9cdf97f | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20935/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20935/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20935/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Support hive.reloadable.aux.jars.path for Hive on Tez > - > > Key: HIVE-22967 > URL: https://issues.apache.org/jira/browse/HIVE-22967 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.2, 2.3.6 >Reporter: Toshihiko Uchida >Assignee: Toshihiko Uchida >Priority: Minor > Attachments: HIVE-22967.1.patch > > > The jars in hive.reloadable.aux.jars.path are not localized in Tez containers. > As a result, any query utilizing those reloadable jars fails for Hive on Tez > due to ClassNotFoundException. 
> {code} > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, > vertexName=Map 1, vertexId=vertex_1578856704640_0087_1_00, diagnostics=[Task > failed, taskId=task_1578856704640_0087_1_00_01, diagnostics=[TaskAttempt > 0 failed, info=[Error: Error while running task ( failure) : > attempt_1578856704640_0087_1_00_01_0:java.lang.RuntimeException: > java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at >
[jira] [Commented] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050527#comment-17050527 ] Hive QA commented on HIVE-22954: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995469/HIVE-22954.patch {color:green}SUCCESS:{color} +1 due to 19 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 281 failed/errored test(s), 18096 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[repl_load_old_version] (batchId=26) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[repl_dump_requires_admin] (batchId=107) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[repl_load_requires_admin] (batchId=107) org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence (batchId=252) org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate (batchId=283) org.apache.hadoop.hive.ql.parse.TestMetaStoreEventListenerInRepl.testReplEvents (batchId=270) org.apache.hadoop.hive.ql.parse.TestParseUtils.testTxnTypeWithDisabledReadOnlyFeature[15] (batchId=336) org.apache.hadoop.hive.ql.parse.TestParseUtils.testTxnTypeWithEnabledReadOnlyFeature[15] (batchId=336) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.retryIncBootstrapAcidFromDifferentDumpWithoutCleanTablesConfig (batchId=259) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.testAcidTablesBootstrapDuringIncremental (batchId=259) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.testAcidTablesBootstrapDuringIncrementalWithOpenTxnsTimeout (batchId=259) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites (batchId=259) 
org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.testRetryAcidTablesBootstrapFromDifferentDump (batchId=259) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAbortTxnEvent (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAcidBootstrapReplLoadRetryAfterFailure (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAcidTablesBootstrap (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAcidTablesBootstrapWithConcurrentDropTable (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAcidTablesBootstrapWithConcurrentWrites (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAcidTablesBootstrapWithOpenTxnsTimeout (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAcidTablesMoveOptimizationBootStrap (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAcidTablesMoveOptimizationIncremental (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testMultiDBTxn (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testOpenTxnEvent (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testTxnEventNonAcid (batchId=277) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testBootStrapDumpOfWarehouse (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testBootstrapFunctionReplication (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testBootstrapLoadRetryAfterFailureForAlterTable (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testBootstrapReplLoadRetryAfterFailureForFunctions (batchId=268) 
org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testBootstrapReplLoadRetryAfterFailureForPartitions (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testBootstrapReplLoadRetryAfterFailureForTablesAndConstraints (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testCreateFunctionIncrementalReplication (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testCreateFunctionWithFunctionBinaryJarsOnHDFS (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testDropFunctionIncrementalReplication (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testIfBootstrapReplLoadFailWhenRetryAfterBootstrapComplete (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testIfCkptAndSourceOfReplPropsIgnoredByReplDump (batchId=268) org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testIfCkptPropIgnoredByExport
[jira] [Commented] (HIVE-21851) FireEventResponse should include event id when available
[ https://issues.apache.org/jira/browse/HIVE-21851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17050508#comment-17050508 ] Vihang Karajgaonkar commented on HIVE-21851: I changed the requiredness of the field to default in the latest patch. Functionally, the patch is the same as v4. > FireEventResponse should include event id when available > > > Key: HIVE-21851 > URL: https://issues.apache.org/jira/browse/HIVE-21851 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Minor > Attachments: HIVE-21851.01.patch, HIVE-21851.02.patch, > HIVE-21851.03.patch, HIVE-21851.04.patch, HIVE-21851.05.patch > > > The metastore API {{fire_listener_event}} gives clients the ability to fire an > INSERT event on DML operations. However, the returned response is an empty > struct. It would be useful to send back the event id information in the > response so that clients can take actions based on the event id. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-21851) FireEventResponse should include event id when available
[ https://issues.apache.org/jira/browse/HIVE-21851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-21851: --- Attachment: HIVE-21851.05.patch > FireEventResponse should include event id when available > > > Key: HIVE-21851 > URL: https://issues.apache.org/jira/browse/HIVE-21851 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Minor > Attachments: HIVE-21851.01.patch, HIVE-21851.02.patch, > HIVE-21851.03.patch, HIVE-21851.04.patch, HIVE-21851.05.patch > > > The metastore API {{fire_listener_event}} gives clients the ability to fire an > INSERT event on DML operations. However, the returned response is an empty > struct. It would be useful to send back the event id information in the > response so that clients can take actions based on the event id. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050502#comment-17050502 ] Hive QA commented on HIVE-22954: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 55s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 55s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 59s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 3 new + 38 unchanged - 0 fixed = 41 total (was 38) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 1249 unchanged - 0 fixed = 1250 total (was 1249) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 5s{color} | {color:red} ql generated 3 new + 1531 unchanged - 0 fixed = 1534 total (was 1531) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 2 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 21s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Found reliance on default encoding in org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute():in org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(): String.getBytes() At ReplDumpTask.java:[line 126] | | | Found reliance on default encoding in org.apache.hadoop.hive.ql.parse.ReplicationSemanticAnalyzer.analyzeReplLoad(ASTNode):in org.apache.hadoop.hive.ql.parse.ReplicationSemanticAnalyzer.analyzeReplLoad(ASTNode): String.getBytes() At ReplicationSemanticAnalyzer.java:[line 350] | | | Suspicious comparison of Long references in org.apache.hadoop.hive.ql.parse.ReplicationSemanticAnalyzer.getCurrentLoadPath(Path) At ReplicationSemanticAnalyzer.java:in org.apache.hadoop.hive.ql.parse.ReplicationSemanticAnalyzer.getCurrentLoadPath(Path) At ReplicationSemanticAnalyzer.java:[line 426] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20934/dev-support/hive-personality.sh | | git revision | master / 9cdf97f | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20934/yetus/diff-checkstyle-ql.txt | | checkstyle |
[jira] [Assigned] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode
[ https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ramesh Kumar Thangarajan reassigned HIVE-22786: --- Assignee: Ramesh Kumar Thangarajan (was: Rajesh Balamohan) > Vectorization: Agg with distinct can be optimised in HASH mode > -- > > Key: HIVE-22786 > URL: https://issues.apache.org/jira/browse/HIVE-22786 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Rajesh Balamohan >Assignee: Ramesh Kumar Thangarajan >Priority: Minor > Attachments: HIVE-22786.1.patch, HIVE-22786.10.patch, > HIVE-22786.2.patch, HIVE-22786.3.patch, HIVE-22786.4.wip.patch, > HIVE-22786.5.patch, HIVE-22786.6.patch, HIVE-22786.7.patch, > HIVE-22786.8.patch, HIVE-22786.9.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode
[ https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ramesh Kumar Thangarajan updated HIVE-22786: Status: Open (was: Patch Available) > Vectorization: Agg with distinct can be optimised in HASH mode > -- > > Key: HIVE-22786 > URL: https://issues.apache.org/jira/browse/HIVE-22786 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-22786.1.patch, HIVE-22786.10.patch, > HIVE-22786.2.patch, HIVE-22786.3.patch, HIVE-22786.4.wip.patch, > HIVE-22786.5.patch, HIVE-22786.6.patch, HIVE-22786.7.patch, > HIVE-22786.8.patch, HIVE-22786.9.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode
[ https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ramesh Kumar Thangarajan updated HIVE-22786: Attachment: HIVE-22786.10.patch Status: Patch Available (was: Open) > Vectorization: Agg with distinct can be optimised in HASH mode > -- > > Key: HIVE-22786 > URL: https://issues.apache.org/jira/browse/HIVE-22786 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Rajesh Balamohan >Assignee: Ramesh Kumar Thangarajan >Priority: Minor > Attachments: HIVE-22786.1.patch, HIVE-22786.10.patch, > HIVE-22786.2.patch, HIVE-22786.3.patch, HIVE-22786.4.wip.patch, > HIVE-22786.5.patch, HIVE-22786.6.patch, HIVE-22786.7.patch, > HIVE-22786.8.patch, HIVE-22786.9.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22929) Performance: quoted identifier parsing uses throwaway Regex via String.replaceAll()
[ https://issues.apache.org/jira/browse/HIVE-22929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050480#comment-17050480 ] Hive QA commented on HIVE-22929: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995468/HIVE-22929.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 18096 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20933/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20933/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20933/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12995468 - PreCommit-HIVE-Build > Performance: quoted identifier parsing uses throwaway Regex via > String.replaceAll() > --- > > Key: HIVE-22929 > URL: https://issues.apache.org/jira/browse/HIVE-22929 > Project: Hive > Issue Type: Bug >Reporter: Gopal Vijayaraghavan >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-22929.1.patch, HIVE-22929.2.patch, > HIVE-22929.2.patch, HIVE-22929.2.patch, HIVE-22929.2.patch, > String.replaceAll.png > > > !String.replaceAll.png! > https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g#L530 > {code} > '`' ( '``' | ~('`') )* '`' { setText(getText().substring(1, > getText().length() -1 ).replaceAll("``", "`")); } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
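The hotspot HIVE-22929 describes is that `String.replaceAll` compiles a fresh, throwaway regex `Pattern` for every quoted identifier the lexer unescapes. A minimal sketch of the usual remedy, hoisting a precompiled literal pattern out of the hot path (class and method names here are illustrative, not Hive's actual fix):

```java
import java.util.regex.Pattern;

public class UnescapeDemo {
    // Compiled once; String.replaceAll would compile a throwaway Pattern on every call.
    private static final Pattern ESCAPED_BACKTICK = Pattern.compile("``", Pattern.LITERAL);

    /** Strips the surrounding backticks and collapses doubled backticks, as the lexer action does. */
    public static String unescape(String quoted) {
        String body = quoted.substring(1, quoted.length() - 1);
        return ESCAPED_BACKTICK.matcher(body).replaceAll("`");
    }
}
```

`Pattern.LITERAL` also avoids regex metacharacter interpretation entirely, which matches the intent of the original `replaceAll("``", "`")`.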
[jira] [Commented] (HIVE-22966) LLAP: Consider including waitTime for comparing attempts in same vertex
[ https://issues.apache.org/jira/browse/HIVE-22966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050477#comment-17050477 ] Mustafa Iman commented on HIVE-22966: - If two tasks are in the same job and their priorities are the same, does it really matter which one gets executed first? > LLAP: Consider including waitTime for comparing attempts in same vertex > --- > > Key: HIVE-22966 > URL: https://issues.apache.org/jira/browse/HIVE-22966 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-22966.3.patch, HIVE-22966.4.patch > > > When attempts are compared within same vertex, it should pick up the attempt > with longest wait time to avoid starvation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
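The starvation concern in HIVE-22966 can be sketched as a two-level comparator: order by priority first, and among equal-priority attempts in the same vertex, prefer the one that has waited longest. This is an illustrative sketch with hypothetical names, not the actual LLAP scheduler code:

```java
import java.util.Comparator;

public class AttemptOrderSketch {
    public static class Attempt {
        final int priority;      // lower value = higher priority
        final long waitStartMs;  // when the attempt entered the wait queue
        public Attempt(int priority, long waitStartMs) {
            this.priority = priority;
            this.waitStartMs = waitStartMs;
        }
    }

    // Priority first; among equals, the earlier waitStart (i.e. longest wait) sorts first.
    public static final Comparator<Attempt> ORDER =
        Comparator.<Attempt>comparingInt(a -> a.priority)
                  .thenComparingLong(a -> a.waitStartMs);
}
```

The tie-break answers the question raised in the comment: between two same-priority attempts it does matter, because always picking the newer one can starve the older.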
[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query
[ https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21778: --- Attachment: HIVE-21778.5.patch > CBO: "Struct is not null" gets evaluated as `nullable` always causing filter > miss in the query > -- > > Key: HIVE-21778 > URL: https://issues.apache.org/jira/browse/HIVE-21778 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 4.0.0, 2.3.5 >Reporter: Rajesh Balamohan >Assignee: Vineet Garg >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, > HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, test_null.q, > test_null.q.out > > Time Spent: 40m > Remaining Estimate: 0h > > {noformat} > drop table if exists test_struct; > CREATE external TABLE test_struct > ( > f1 string, > demo_struct struct, > datestr string > ); > set hive.cbo.enable=true; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note > that demo_struct filter is not added here > Filter Operator > predicate: (datestr = '2019-01-01') (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > set hive.cbo.enable=false; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) <- Note that demo_struct filter is added when CBO is > turned off > Filter Operator > predicate: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) > Select Operator > 
expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > {noformat} > In CalcitePlanner::genFilterRelNode, the following code misses to evaluate > this filter. > {noformat} > RexNode factoredFilterExpr = RexUtil > .pullFactors(cluster.getRexBuilder(), convertedFilterExpr); > {noformat} > Note that even if we add `demo_struct.f1` it would end up pushing the filter > correctly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query
[ https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21778: --- Status: Patch Available (was: Open) > CBO: "Struct is not null" gets evaluated as `nullable` always causing filter > miss in the query > -- > > Key: HIVE-21778 > URL: https://issues.apache.org/jira/browse/HIVE-21778 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.3.5, 4.0.0 >Reporter: Rajesh Balamohan >Assignee: Vineet Garg >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, > HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, test_null.q, > test_null.q.out > > Time Spent: 40m > Remaining Estimate: 0h > > {noformat} > drop table if exists test_struct; > CREATE external TABLE test_struct > ( > f1 string, > demo_struct struct, > datestr string > ); > set hive.cbo.enable=true; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note > that demo_struct filter is not added here > Filter Operator > predicate: (datestr = '2019-01-01') (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > set hive.cbo.enable=false; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) <- Note that demo_struct filter is added when CBO is > turned off > Filter Operator > predicate: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) > Select Operator > 
expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > {noformat} > In CalcitePlanner::genFilterRelNode, the following code misses to evaluate > this filter. > {noformat} > RexNode factoredFilterExpr = RexUtil > .pullFactors(cluster.getRexBuilder(), convertedFilterExpr); > {noformat} > Note that even if we add `demo_struct.f1` it would end up pushing the filter > correctly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query
[ https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21778: --- Status: Open (was: Patch Available) > CBO: "Struct is not null" gets evaluated as `nullable` always causing filter > miss in the query > -- > > Key: HIVE-21778 > URL: https://issues.apache.org/jira/browse/HIVE-21778 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.3.5, 4.0.0 >Reporter: Rajesh Balamohan >Assignee: Vineet Garg >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, > HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, test_null.q, > test_null.q.out > > Time Spent: 40m > Remaining Estimate: 0h > > {noformat} > drop table if exists test_struct; > CREATE external TABLE test_struct > ( > f1 string, > demo_struct struct, > datestr string > ); > set hive.cbo.enable=true; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note > that demo_struct filter is not added here > Filter Operator > predicate: (datestr = '2019-01-01') (type: boolean) > Select Operator > expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > set hive.cbo.enable=false; > explain select * from etltmp.test_struct where datestr='2019-01-01' and > demo_struct is not null; > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: -1 > Processor Tree: > TableScan > alias: test_struct > filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) <- Note that demo_struct filter is added when CBO is > turned off > Filter Operator > predicate: ((datestr = '2019-01-01') and demo_struct is not null) > (type: boolean) > Select Operator > 
expressions: f1 (type: string), demo_struct (type: > struct), '2019-01-01' (type: string) > outputColumnNames: _col0, _col1, _col2 > ListSink > {noformat} > In CalcitePlanner::genFilterRelNode, the following code misses to evaluate > this filter. > {noformat} > RexNode factoredFilterExpr = RexUtil > .pullFactors(cluster.getRexBuilder(), convertedFilterExpr); > {noformat} > Note that even if we add `demo_struct.f1` it would end up pushing the filter > correctly. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22929) Performance: quoted identifier parsing uses throwaway Regex via String.replaceAll()
[ https://issues.apache.org/jira/browse/HIVE-22929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050414#comment-17050414 ] Hive QA commented on HIVE-22929: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 3s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 50s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20933/dev-support/hive-personality.sh | | git revision | master / 9cdf97f | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20933/yetus/patch-asflicense-problems.txt | | modules | C: parser U: parser | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20933/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Performance: quoted identifier parsing uses throwaway Regex via > String.replaceAll() > --- > > Key: HIVE-22929 > URL: https://issues.apache.org/jira/browse/HIVE-22929 > Project: Hive > Issue Type: Bug >Reporter: Gopal Vijayaraghavan >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-22929.1.patch, HIVE-22929.2.patch, > HIVE-22929.2.patch, HIVE-22929.2.patch, HIVE-22929.2.patch, > String.replaceAll.png > > > !String.replaceAll.png! > https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g#L530 > {code} > '`' ( '``' | ~('`') )* '`' { setText(getText().substring(1, > getText().length() -1 ).replaceAll("``", "`")); } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-22957) Support For Filter Expression In MSCK Command
[ https://issues.apache.org/jira/browse/HIVE-22957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17049245#comment-17049245 ] Syed Shameerur Rahman edited comment on HIVE-22957 at 3/3/20 5:27 PM: -- [~prasanth_j] [~rbalamohan] [~jcamachorodriguez] Any Concerns / Suggestions regarding the approach? was (Author: srahman): [~prasanth_j] [~rbalamohan] Any Concerns / Suggestions regarding the approach? > Support For Filter Expression In MSCK Command > - > > Key: HIVE-22957 > URL: https://issues.apache.org/jira/browse/HIVE-22957 > Project: Hive > Issue Type: Improvement >Reporter: Syed Shameerur Rahman >Assignee: Syed Shameerur Rahman >Priority: Major > Fix For: 4.0.0 > > > Currently MSCK command supports full repair of table (all partitions) or some > subset of partitions based on partitionSpec. The aim of this jira is to > introduce a filterExp (=, !=, <, >, >=, <=, LIKE) in MSCK command so that a > larger subset of partitions can be recovered (added/deleted) without firing a > full repair, which might take time if the no. of partitions is huge. > *Approach*: > The initial approach is to add a where clause in MSCK command Eg: MSCK REPAIR > TABLE ADD|DROP|SYNC PARTITIONS WHERE > AND > *Flow:* > 1) Parse the where clause and generate filterExpression > 2) fetch all the partitions from the metastore which match the filter > expression > 3) fetch all the partition files from the filesystem > 4) remove all the partition paths which do not match the filter > expression > 5) Based on ADD | DROP | SYNC do the remaining steps. -- This message was sent by Atlassian Jira (v8.3.4#803005)
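Step 4 of the flow above, dropping filesystem partition paths that fail the filter expression, can be sketched with a hypothetical helper (not the patch itself) that parses `key=value` path segments into a partition spec and applies a predicate:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class MsckFilterSketch {
    /** Keeps only the partition paths whose key=value segments satisfy the filter. */
    public static List<String> filterPaths(List<String> partitionPaths,
                                           Predicate<Map<String, String>> filter) {
        List<String> kept = new ArrayList<>();
        for (String path : partitionPaths) {
            Map<String, String> spec = new LinkedHashMap<>();
            for (String segment : path.split("/")) {   // e.g. "datestr=2019-01-01/hr=10"
                int eq = segment.indexOf('=');
                if (eq > 0) {
                    spec.put(segment.substring(0, eq), segment.substring(eq + 1));
                }
            }
            if (filter.test(spec)) {
                kept.add(path);
            }
        }
        return kept;
    }
}
```

In the real implementation the predicate would be the compiled filterExpression from step 1 rather than a Java lambda.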
[jira] [Commented] (HIVE-22901) Variable substitution can lead to OOM on circular references
[ https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050408#comment-17050408 ] Hive QA commented on HIVE-22901: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995467/HIVE-22901.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18096 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestRestrictedList.testRestrictedList (batchId=293) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20932/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20932/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20932/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12995467 - PreCommit-HIVE-Build > Variable substitution can lead to OOM on circular references > > > Key: HIVE-22901 > URL: https://issues.apache.org/jira/browse/HIVE-22901 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.2 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-22901.1.patch > > > {{SystemVariables#substitute()}} is dealing with circular references between > variables by only doing the substitution 40 times by default. If the > substituted part is sufficiently large though, it's possible that the > substitution will produce a string bigger than the heap size within the 40 > executions. 
> Take the following test case that fails with OOM in current master (third > round of execution would need 10G heap, while running with only 2G): > {code} > @Test > public void testSubstitute() { > String randomPart = RandomStringUtils.random(100_000); > String reference = "${hiveconf:myTestVariable}"; > StringBuilder longStringWithReferences = new StringBuilder(); > for(int i = 0; i < 10; i ++) { > longStringWithReferences.append(randomPart).append(reference); > } > SystemVariables uut = new SystemVariables(); > HiveConf conf = new HiveConf(); > conf.set("myTestVariable", longStringWithReferences.toString()); > uut.substitute(conf, longStringWithReferences.toString(), 40); > } > {code} > Produces: > {code} > java.lang.OutOfMemoryError: Java heap space > at java.util.Arrays.copyOf(Arrays.java:3332) > at > java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) > at > java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) > at java.lang.StringBuilder.append(StringBuilder.java:136) > at > org.apache.hadoop.hive.conf.SystemVariables.substitute(SystemVariables.java:110) > at > org.apache.hadoop.hive.conf.SystemVariablesTest.testSubstitute(SystemVariablesTest.java:27) > {code} > We should check the size of the substituted query and bail out earlier. -- This message was sent by Atlassian Jira (v8.3.4#803005)
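The "bail out earlier" suggestion in HIVE-22901 amounts to checking the intermediate string length after each substitution round instead of relying only on the round count. A hedged sketch of that guard (the cap and all names are hypothetical; a real fix would presumably read a configurable limit from HiveConf):

```java
public class SubstituteGuard {
    // Hypothetical cap on the expanded string; illustrative only.
    static final int MAX_SUBSTITUTED_LEN = 1 << 20; // ~1M chars

    /** One substitution round that refuses to let the string grow past the cap. */
    public static String substituteOnce(String input, String var, String value) {
        String out = input.replace(var, value);
        if (out.length() > MAX_SUBSTITUTED_LEN) {
            throw new IllegalStateException(
                "Substituted string exceeds " + MAX_SUBSTITUTED_LEN
                + " chars; possible circular variable reference");
        }
        return out;
    }
}
```

With such a check, the test case in the description fails fast with a clear error instead of exhausting the heap on the third round.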
[jira] [Commented] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17050371#comment-17050371 ] David McGinnis commented on HIVE-21218: --- Looks like this patch has been abandoned. I'm going to take over the Jira and shepherd it into the repo. > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug > Components: kafka integration, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Milan Baran >Assignee: Milan Baran >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21218.2.patch, HIVE-21218.patch > > Time Spent: 4h 40m > Remaining Estimate: 0h > > According to [Google > groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] > the Confluent Avro serializer uses a proprietary format for the kafka value - > <magic byte><4 bytes of schema ID><bytes conforming to the schema>. > This format does not cause any problem for the Confluent kafka deserializer, which > respects the format; however, for the Hive kafka handler it is a bit of a problem to > correctly deserialize the kafka value, because Hive uses a custom deserializer from > bytes to objects and ignores the kafka consumer ser/deser classes provided via > table property. > It would be nice to support the Confluent format with magic byte. > Also it would be great to support Schema Registry as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer
[ https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David McGinnis reassigned HIVE-21218: - Assignee: David McGinnis (was: Milan Baran) > KafkaSerDe doesn't support topics created via Confluent Avro serializer > --- > > Key: HIVE-21218 > URL: https://issues.apache.org/jira/browse/HIVE-21218 > Project: Hive > Issue Type: Bug > Components: kafka integration, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Milan Baran >Assignee: David McGinnis >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21218.2.patch, HIVE-21218.patch > > Time Spent: 4h 40m > Remaining Estimate: 0h > > According to [Google > groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] > the Confluent Avro serializer uses a proprietary format for the kafka value - > <magic byte><4 bytes of schema ID><bytes conforming to the schema>. > This format does not cause any problem for the Confluent kafka deserializer, which > respects the format; however, for the Hive kafka handler it is a bit of a problem to > correctly deserialize the kafka value, because Hive uses a custom deserializer from > bytes to objects and ignores the kafka consumer ser/deser classes provided via > table property. > It would be nice to support the Confluent format with magic byte. > Also it would be great to support Schema Registry as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
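The Confluent wire format described in the ticket (one zero magic byte, a 4-byte big-endian Schema Registry ID, then the Avro-encoded payload) can be unpacked with plain `ByteBuffer` arithmetic. This is an illustrative sketch of the framing, not the eventual Hive patch:

```java
import java.nio.ByteBuffer;

public class ConfluentWireFormat {
    /** Reads the 4-byte big-endian Schema Registry ID that follows the 0x0 magic byte. */
    public static int schemaId(byte[] kafkaValue) {
        ByteBuffer buf = ByteBuffer.wrap(kafkaValue);
        byte magic = buf.get();
        if (magic != 0) {
            throw new IllegalArgumentException("Unknown magic byte: " + magic);
        }
        return buf.getInt();
    }

    /** Everything after the 5-byte header is the Avro-encoded record. */
    public static byte[] avroPayload(byte[] kafkaValue) {
        byte[] payload = new byte[kafkaValue.length - 5];
        System.arraycopy(kafkaValue, 5, payload, 0, payload.length);
        return payload;
    }
}
```

A SerDe supporting this format would resolve `schemaId` against the registry and feed `avroPayload` to an Avro datum reader for that schema.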
[jira] [Updated] (HIVE-22971) Eliminate file rename in insert-only compactor
[ https://issues.apache.org/jira/browse/HIVE-22971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage updated HIVE-22971: - Labels: ACID compaction (was: ACID) > Eliminate file rename in insert-only compactor > -- > > Key: HIVE-22971 > URL: https://issues.apache.org/jira/browse/HIVE-22971 > Project: Hive > Issue Type: Improvement >Reporter: Karen Coppage >Priority: Major > Labels: ACID, compaction > > File rename is expensive for object stores, so MM (insert-only) compaction > should skip that step when committing and write directly to base_x_cZ or > delta_x_y_cZ. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22971) Eliminate file rename in insert-only compactor
[ https://issues.apache.org/jira/browse/HIVE-22971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage updated HIVE-22971: - Labels: ACID (was: ) > Eliminate file rename in insert-only compactor > -- > > Key: HIVE-22971 > URL: https://issues.apache.org/jira/browse/HIVE-22971 > Project: Hive > Issue Type: Improvement >Reporter: Karen Coppage >Priority: Major > Labels: ACID > > File rename is expensive for object stores, so MM (insert-only) compaction > should skip that step when committing and write directly to base_x_cZ or > delta_x_y_cZ. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22901) Variable substitution can lead to OOM on circular references
[ https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050361#comment-17050361 ] Hive QA commented on HIVE-22901: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} common in master has 63 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} common: The patch generated 37 new + 376 unchanged - 0 fixed = 413 total (was 376) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20932/dev-support/hive-personality.sh | | git revision | master / 9cdf97f | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20932/yetus/diff-checkstyle-common.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20932/yetus/patch-asflicense-problems.txt | | modules | C: common U: common | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20932/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Variable substitution can lead to OOM on circular references > > > Key: HIVE-22901 > URL: https://issues.apache.org/jira/browse/HIVE-22901 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.2 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-22901.1.patch > > > {{SystemVariables#substitute()}} is dealing with circular references between > variables by only doing the substitution 40 times by default. 
If the > substituted part is sufficiently large though, it's possible that the > substitution will produce a string bigger than the heap size within the 40 > executions. > Take the following test case that fails with OOM in current master (third > round of execution would need 10G heap, while running with only 2G): > {code} > @Test > public void testSubstitute() { > String randomPart = RandomStringUtils.random(100_000); > String reference = "${hiveconf:myTestVariable}"; > StringBuilder longStringWithReferences = new StringBuilder(); > for(int i = 0; i < 10; i ++) { > longStringWithReferences.append(randomPart).append(reference); > } > SystemVariables uut = new SystemVariables(); > HiveConf conf = new HiveConf(); > conf.set("myTestVariable", longStringWithReferences.toString()); > uut.substitute(conf, longStringWithReferences.toString(), 40); > } > {code} >
[jira] [Commented] (HIVE-22762) Leap day is incorrectly parsed during cast in Hive
[ https://issues.apache.org/jira/browse/HIVE-22762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050360#comment-17050360 ] David Mollitor commented on HIVE-22762: --- {code:java} // add @throws JavaDoc private Timestamp getTimestampFromValues(List temporalValues) { if (temporalTokens.size() != temporalValues.size()) { // use Guava Preconditions.checkState(boolean) } // Use parameters for ImmutablePair here (and removed future casts) List tokensList = new ArrayList<>(); // Instead of sorting then reversing, just sort in reverse order :) // https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html#reverseOrder-java.util.Comparator- tokensList.sort((Comparator) (o1, o2) -> { Token token1 = ((ImmutablePair) o1).left; Token token2 = ((ImmutablePair) o2).left; return token1.temporalField.getBaseUnit().getDuration() .compareTo(token2.temporalField.getBaseUnit().getDuration()); }); Collections.reverse(tokensList); } {code} Rather than adding a new token list that captures all the temporal tokens, I would rather see that the method accept a list of tokens and a list of values. The tokens can be filtered and sorted in the method. This way there is only ever a single list to keep track of and users (methods) can filter however they want on the fly. 
{code:java} List tokens = temporalTokens.stream() .filter(token-> isNumericTemporalToken(token.type) || isCharacterTemporalToken(token.type)) .collect(Collectors.toList()); {code} > Leap day is incorrectly parsed during cast in Hive > -- > > Key: HIVE-22762 > URL: https://issues.apache.org/jira/browse/HIVE-22762 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-22762.01.patch, HIVE-22762.01.patch, > HIVE-22762.01.patch, HIVE-22762.01.patch, HIVE-22762.02.patch, > HIVE-22762.03.patch, HIVE-22762.03.patch > > > While casting a string to a date with a custom date format having the day token > before the year and month tokens, the date is parsed incorrectly for leap days. > h3. How to reproduce > Execute {code}select cast("29 02 0" as date format "dd mm rr"){code} with > Hive. The query incorrectly results in *2020-02-28*. > > Executing another cast with a slightly modified representation of the > date (day preceded by year and month) is however parsed correctly: > {code}select cast("0 02 29" as date format "rr mm dd"){code} > It returns *2020-02-29*. -- This message was sent by Atlassian Jira (v8.3.4#803005)
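The single-pass reverse-order sort suggested in the review above can be sketched with `java.time` types. Here `ChronoField` stands in for the parser's internal token fields (the real Hive code sorts `Token`/`ImmutablePair` instances), so this is an illustration of the idea, not the actual implementation:

```java
import java.time.temporal.TemporalField;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TokenSort {
    // Instead of sorting ascending and then calling Collections.reverse(),
    // sort once with a reversed comparator on the base unit's duration,
    // so the largest temporal unit (year) comes first.
    public static List<TemporalField> sortLargestUnitFirst(List<TemporalField> fields) {
        List<TemporalField> sorted = new ArrayList<>(fields);
        sorted.sort(Comparator.comparing(
                (TemporalField f) -> f.getBaseUnit().getDuration()).reversed());
        return sorted;
    }
}
```

Processing fields from the largest unit down is what makes day-of-month validation see the already-resolved year, which matters for the leap-day case in this ticket.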
[jira] [Commented] (HIVE-22785) Update/delete/merge statements not optimized through CBO
[ https://issues.apache.org/jira/browse/HIVE-22785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050333#comment-17050333 ] Hive QA commented on HIVE-22785: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12995466/HIVE-22785.2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 148 failed/errored test(s), 18096 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join_on_filter] (batchId=309) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_subquery] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_view_delete] (batchId=38) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_view_disable_cbo_1] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join0] (batchId=102) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join15] (batchId=18) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join20] (batchId=103) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join21] (batchId=94) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join23] (batchId=21) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join28] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join29] (batchId=63) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join31] (batchId=51) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_auto_join0] (batchId=18) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer14] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_transactional_full_acid] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas] (batchId=7) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_char] (batchId=21) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_date] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_varchar] (batchId=19) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_all_non_partitioned] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_orig_table] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_whole_partition] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_optimization_acid2] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[escape_sortby1] (batchId=53) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_locks] (batchId=50) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[identity_project_remove_skip] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input4_limit] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part7] (batchId=19) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_update_delete] (batchId=99) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join0] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join15] (batchId=94) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join20] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join21] (batchId=46) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join23] (batchId=50) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join40] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_filter_on_outerjoin] (batchId=71) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_test_outer] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_acid_no_masking] (batchId=27) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[no_hooks] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parallel_join0] (batchId=86) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join4] (batchId=66) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[print_header] (batchId=8) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_ppr] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_all_non_partitioned] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_two_cols] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_delete_orig_table] (batchId=2) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets] (batchId=187) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=190) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_join0] (batchId=193) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_join21] (batchId=191)
[jira] [Updated] (HIVE-22126) hive-exec packaging should shade guava
[ https://issues.apache.org/jira/browse/HIVE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Chung updated HIVE-22126: Attachment: HIVE-22126.05.patch Status: Patch Available (was: Open) The module calcite-linq4j is included in hive-exec. > hive-exec packaging should shade guava > -- > > Key: HIVE-22126 > URL: https://issues.apache.org/jira/browse/HIVE-22126 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Eugene Chung >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22126.01.patch, HIVE-22126.02.patch, > HIVE-22126.03.patch, HIVE-22126.04.patch, HIVE-22126.05.patch > > > The ql/pom.xml includes the complete Guava library in hive-exec.jar > https://github.com/apache/hive/blob/master/ql/pom.xml#L990 This causes > problems for downstream clients of Hive which have hive-exec.jar in their > classpath, since they are pinned to the same Guava version as that of Hive. > We should shade the Guava classes so that other components which depend on > hive-exec can independently use a different version of Guava as needed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22126) hive-exec packaging should shade guava
[ https://issues.apache.org/jira/browse/HIVE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Chung updated HIVE-22126: Status: Open (was: Patch Available) > hive-exec packaging should shade guava > -- > > Key: HIVE-22126 > URL: https://issues.apache.org/jira/browse/HIVE-22126 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Eugene Chung >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22126.01.patch, HIVE-22126.02.patch, > HIVE-22126.03.patch, HIVE-22126.04.patch > > > The ql/pom.xml includes the complete Guava library in hive-exec.jar > https://github.com/apache/hive/blob/master/ql/pom.xml#L990 This causes > problems for downstream clients of Hive which have hive-exec.jar in their > classpath, since they are pinned to the same Guava version as that of Hive. > We should shade the Guava classes so that other components which depend on > hive-exec can independently use a different version of Guava as needed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22762) Leap day is incorrectly parsed during cast in Hive
[ https://issues.apache.org/jira/browse/HIVE-22762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050304#comment-17050304 ] Karen Coppage commented on HIVE-22762: -- Yep, I use the {{--no-prefix}} flag like the Hive wiki directs. But I hear that's not necessary anymore > Leap day is incorrectly parsed during cast in Hive > -- > > Key: HIVE-22762 > URL: https://issues.apache.org/jira/browse/HIVE-22762 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-22762.01.patch, HIVE-22762.01.patch, > HIVE-22762.01.patch, HIVE-22762.01.patch, HIVE-22762.02.patch, > HIVE-22762.03.patch, HIVE-22762.03.patch > > > While casting a string to a date with a custom date format having the day token > before the year and month tokens, the date is parsed incorrectly for leap days. > h3. How to reproduce > Execute {code}select cast("29 02 0" as date format "dd mm rr"){code} with > Hive. The query incorrectly results in *2020-02-28*. > > Executing another cast with a slightly modified representation of the > date (day preceded by year and month) is however parsed correctly: > {code}select cast("0 02 29" as date format "rr mm dd"){code} > It returns *2020-02-29*. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22970) Add a qoption to enable tests to use transactional mode
[ https://issues.apache.org/jira/browse/HIVE-22970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-22970: --- > Add a qoption to enable tests to use transactional mode > --- > > Key: HIVE-22970 > URL: https://issues.apache.org/jira/browse/HIVE-22970 > Project: Hive > Issue Type: Bug > Components: Tests >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > in scheduled query tests; the executor is launched from a qoption - however > scheduled queries make a snapshot of the actual hiveconf, and as such there > is no way to alter hiveconf keys for scheduled executions in the tests. > moving the "usual" transactional enabler settings to a qoption may also help > clean up our tests a bit -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22969) Union remove optimisation results in incorrect data when inserting to ACID table
[ https://issues.apache.org/jira/browse/HIVE-22969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora reassigned HIVE-22969: > Union remove optimisation results in incorrect data when inserting to ACID table > - > > Key: HIVE-22969 > URL: https://issues.apache.org/jira/browse/HIVE-22969 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > > Steps to reproduce the issue: > {noformat} > create table input_text(key string, val string) stored as textfile location > '/Users/martakuczora/work/hive/warehouse/external/input_text'; > create table output_acid(key string, val string) stored as orc > tblproperties('transactional'='true'); > insert into input_text values ('1','1'), ('2','2'),('3','3'); > {noformat} > {noformat} > set hive.mapred.mode=nonstrict; > set hive.stats.autogather=false; > set hive.optimize.union.remove=true; > set hive.auto.convert.join=true; > set hive.exec.submitviachild=false; > set hive.exec.submit.local.task.via.child=false; > SELECT * FROM ( > select key, val from input_text > union all > select a.key as key, b.val as val FROM input_text a join input_text b on > a.key=b.key) c; > The result of the select: > 1 1 > 2 2 > 3 3 > 1 1 > 2 2 > 3 3 > {noformat} > {noformat} > insert into table output_acid > SELECT * FROM ( > select key, val from input_text > union all > select a.key as key, b.val as val FROM input_text a join input_text b on > a.key=b.key) c; > select * from output_acid; > The result: > 1 1 > 2 2 > 3 3 > {noformat} > The folder of the output_acid table contained the following delta directories: > {noformat} > drwxr-xr-x 6 martakuczora staff 192 Mar 2 16:29 delta_000_000 > drwxr-xr-x 6 martakuczora staff 192 Mar 2 16:29 delta_001_001_0001 > {noformat} > It can be seen that the statement ID from the first directory is missing, and > when a select statement runs on the table, this directory will be ignored. 
> That's why only half of the data got returned when running the select on the > output_acid table. > If either hive.stats.autogather is set to true or hive.optimize.union.remove > is set to false the result of the insert will be correct. In this case there > will be only 1 delta directory in the table's folder. -- This message was sent by Atlassian Jira (v8.3.4#803005)
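The role of the missing statement ID can be sketched under the delta-directory naming convention described above, delta_<minWriteId>_<maxWriteId>_<statementId>, where the last component is what distinguishes multiple deltas written within the same transaction. `hasStatementId` below is a hypothetical helper for illustration, not Hive's actual `AcidUtils` parsing:

```java
public class DeltaName {
    // A per-statement delta directory name has four underscore-separated
    // parts: "delta", minWriteId, maxWriteId, statementId. The first
    // directory from the bug report (delta_000_000) lacks the fourth part,
    // which is why the reader skipped it and returned only half the rows.
    public static boolean hasStatementId(String dirName) {
        return dirName.startsWith("delta_") && dirName.split("_").length == 4;
    }
}
```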
[jira] [Commented] (HIVE-22968) Set hive.parquet.timestamp.time.unit default to micros
[ https://issues.apache.org/jira/browse/HIVE-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050295#comment-17050295 ] Marta Kuczora commented on HIVE-22968: -- +1 pending tests Thanks [~klcopp] for the patch. > Set hive.parquet.timestamp.time.unit default to micros > -- > > Key: HIVE-22968 > URL: https://issues.apache.org/jira/browse/HIVE-22968 > Project: Hive > Issue Type: Task >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Major > Attachments: HIVE-22968.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22968) Set hive.parquet.timestamp.time.unit default to micros
[ https://issues.apache.org/jira/browse/HIVE-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage updated HIVE-22968: - Attachment: HIVE-22968.patch > Set hive.parquet.timestamp.time.unit default to micros > -- > > Key: HIVE-22968 > URL: https://issues.apache.org/jira/browse/HIVE-22968 > Project: Hive > Issue Type: Task >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Major > Attachments: HIVE-22968.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22899) Make sure qtests clean up copied files from test directories
[ https://issues.apache.org/jira/browse/HIVE-22899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050290#comment-17050290 ] Peter Vary commented on HIVE-22899: --- +1 > Make sure qtests clean up copied files from test directories > > > Key: HIVE-22899 > URL: https://issues.apache.org/jira/browse/HIVE-22899 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Chovan >Assignee: Zoltan Chovan >Priority: Minor > Attachments: HIVE-22899.2.patch, HIVE-22899.3.patch, > HIVE-22899.4.patch, HIVE-22899.5.patch, HIVE-22899.6.patch, > HIVE-22899.7.patch, HIVE-22899.8.patch, HIVE-22899.patch > > > Several qtest files are copying schema or test files to the test directories > (such as ${system:test.tmp.dir} and > ${hiveconf:hive.metastore.warehouse.dir}), many times without changing the > name of the copied file. When the same files is copied by another qtest to > the same directory the copy and hence the test fails. This can lead to flaky > tests when any two of these qtests gets scheduled to the same batch. > > In order to avoid these failures, we should make sure the files copied to the > test dirs have unique names and we should make sure these files are cleaned > up by the same qtest files that copies the file. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22968) Set hive.parquet.timestamp.time.unit default to micros
[ https://issues.apache.org/jira/browse/HIVE-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage reassigned HIVE-22968: > Set hive.parquet.timestamp.time.unit default to micros > -- > > Key: HIVE-22968 > URL: https://issues.apache.org/jira/browse/HIVE-22968 > Project: Hive > Issue Type: Task >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22785) Update/delete/merge statements not optimized through CBO
[ https://issues.apache.org/jira/browse/HIVE-22785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050285#comment-17050285 ] Hive QA commented on HIVE-22785: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 53s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 15 new + 154 unchanged - 4 fixed = 169 total (was 158) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 16s{color} | {color:red} The patch generated 2 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20931/dev-support/hive-personality.sh | | git revision | master / 9cdf97f | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20931/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-20931/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20931/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Update/delete/merge statements not optimized through CBO > > > Key: HIVE-22785 > URL: https://issues.apache.org/jira/browse/HIVE-22785 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Krisztian Kasa >Priority: Critical > Attachments: HIVE-22785.1.patch, HIVE-22785.2.patch, > HIVE-22785.2.patch > > > Currently, CBO is bypassed for update/delete/merge statements. 
> To support optimizing these statements through CBO, we need to complete three > main tasks: 1) support for sort in Calcite planner, 2) support for SORT in > AST converter, and 3) {{RewriteSemanticAnalyzer}} should extend > {{CalcitePlanner}} instead of {{SemanticAnalyzer}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22967) Support hive.reloadable.aux.jars.path for Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiko Uchida updated HIVE-22967: Attachment: HIVE-22967.1.patch Status: Patch Available (was: In Progress) > Support hive.reloadable.aux.jars.path for Hive on Tez > - > > Key: HIVE-22967 > URL: https://issues.apache.org/jira/browse/HIVE-22967 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.6, 3.1.2 >Reporter: Toshihiko Uchida >Assignee: Toshihiko Uchida >Priority: Minor > Attachments: HIVE-22967.1.patch > > > The jars in hive.reloadable.aux.jars.path are not localized in Tez containers. > As a result, any query utilizing those reloadable jars fails for Hive on Tez > due to ClassNotFoundException. > {code} > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, > vertexName=Map 1, vertexId=vertex_1578856704640_0087_1_00, diagnostics=[Task > failed, taskId=task_1578856704640_0087_1_00_01, diagnostics=[TaskAttempt > 0 failed, info=[Error: Error while running task ( failure) : > attempt_1578856704640_0087_1_00_01_0:java.lang.RuntimeException: > java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at > 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) > at > com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:354) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) > ... 
16 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:71) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.initializeOp(VectorFilterOperator.java:83) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:573) > at > org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:525) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:386) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.initializeMapOperator(VectorMapOperator.java:591) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:317) > ... 17 more > Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.getUdfClass(GenericUDFBridge.java:134) > at > org.apache.hadoop.hive.ql.exec.FunctionRegistry.isStateful(FunctionRegistry.java:1492) > at > org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.(ExprNodeGenericFuncEvaluator.java:111) > at > org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory.get(ExprNodeEvaluatorFactory.java:58) > at >
[jira] [Commented] (HIVE-22901) Variable substitution can lead to OOM on circular references
[ https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17050270#comment-17050270 ] Zoltan Haindrich commented on HIVE-22901: - +1 pending tests > Variable substitution can lead to OOM on circular references > > > Key: HIVE-22901 > URL: https://issues.apache.org/jira/browse/HIVE-22901 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.2 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-22901.1.patch > > > {{SystemVariables#substitute()}} is dealing with circular references between > variables by only doing the substitution 40 times by default. If the > substituted part is sufficiently large though, it's possible that the > substitution will produce a string bigger than the heap size within the 40 > executions. > Take the following test case that fails with OOM in current master (third > round of execution would need 10G heap, while running with only 2G): > {code} > @Test > public void testSubstitute() { > String randomPart = RandomStringUtils.random(100_000); > String reference = "${hiveconf:myTestVariable}"; > StringBuilder longStringWithReferences = new StringBuilder(); > for(int i = 0; i < 10; i ++) { > longStringWithReferences.append(randomPart).append(reference); > } > SystemVariables uut = new SystemVariables(); > HiveConf conf = new HiveConf(); > conf.set("myTestVariable", longStringWithReferences.toString()); > uut.substitute(conf, longStringWithReferences.toString(), 40); > } > {code} > Produces: > {code} > java.lang.OutOfMemoryError: Java heap space > at java.util.Arrays.copyOf(Arrays.java:3332) > at > java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) > at > java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) > at java.lang.StringBuilder.append(StringBuilder.java:136) > at > org.apache.hadoop.hive.conf.SystemVariables.substitute(SystemVariables.java:110) > at > 
org.apache.hadoop.hive.conf.SystemVariablesTest.testSubstitute(SystemVariablesTest.java:27) > {code} > We should check the size of the substituted query and bail out earlier. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Description: [https://github.com/apache/hive/pull/932] > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.patch > > > [https://github.com/apache/hive/pull/932] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22967) Support hive.reloadable.aux.jars.path for Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Toshihiko Uchida reassigned HIVE-22967: --- > Support hive.reloadable.aux.jars.path for Hive on Tez > - > > Key: HIVE-22967 > URL: https://issues.apache.org/jira/browse/HIVE-22967 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.6, 3.1.2 >Reporter: Toshihiko Uchida >Assignee: Toshihiko Uchida >Priority: Minor > > The jars in hive.reloadable.aux.jars.path are not localized in Tez containers. > As a result, any query utilizing those reloadable jars fails for Hive on Tez > due to ClassNotFoundException. > {code} > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, > vertexName=Map 1, vertexId=vertex_1578856704640_0087_1_00, diagnostics=[Task > failed, taskId=task_1578856704640_0087_1_00_01, diagnostics=[TaskAttempt > 0 failed, info=[Error: Error while running task ( failure) : > attempt_1578856704640_0087_1_00_01_0:java.lang.RuntimeException: > java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) > at > com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:354) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) > ... 16 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:71) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.initializeOp(VectorFilterOperator.java:83) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:573) > at > org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:525) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:386) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.initializeMapOperator(VectorMapOperator.java:591) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:317) > ... 
17 more > Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.getUdfClass(GenericUDFBridge.java:134) > at > org.apache.hadoop.hive.ql.exec.FunctionRegistry.isStateful(FunctionRegistry.java:1492) > at > org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.(ExprNodeGenericFuncEvaluator.java:111) > at > org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory.get(ExprNodeEvaluatorFactory.java:58) > at > org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:63) > ... 24 more > Caused by:
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Issue Type: Task (was: Bug) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Task >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HIVE-22967) Support hive.reloadable.aux.jars.path for Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-22967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-22967 started by Toshihiko Uchida. --- > Support hive.reloadable.aux.jars.path for Hive on Tez > - > > Key: HIVE-22967 > URL: https://issues.apache.org/jira/browse/HIVE-22967 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.2, 2.3.6 >Reporter: Toshihiko Uchida >Assignee: Toshihiko Uchida >Priority: Minor > > The jars in hive.reloadable.aux.jars.path are not localized in Tez containers. > As a result, any query utilizing those reloadable jars fails for Hive on Tez > due to ClassNotFoundException. > {code} > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, > vertexName=Map 1, vertexId=vertex_1578856704640_0087_1_00, diagnostics=[Task > failed, taskId=task_1578856704640_0087_1_00_01, diagnostics=[TaskAttempt > 0 failed, info=[Error: Error while running task ( failure) : > attempt_1578856704640_0087_1_00_01_0:java.lang.RuntimeException: > java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) > at > com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) > at > com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: Map operator initialization failed > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:354) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) > ... 16 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:71) > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.initializeOp(VectorFilterOperator.java:83) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:573) > at > org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:525) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:386) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.initializeMapOperator(VectorMapOperator.java:591) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:317) > ... 
17 more > Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: > com.example.hive.udf.Lower > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.getUdfClass(GenericUDFBridge.java:134) > at > org.apache.hadoop.hive.ql.exec.FunctionRegistry.isStateful(FunctionRegistry.java:1492) > at > org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.(ExprNodeGenericFuncEvaluator.java:111) > at > org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory.get(ExprNodeEvaluatorFactory.java:58) > at > org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:63) > ... 24 more > Caused by:
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Labels: pull-request-available (was: ) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22954.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aasha Medhi updated HIVE-22954: --- Attachment: HIVE-22954.patch Status: Patch Available (was: In Progress) > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > Attachments: HIVE-22954.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HIVE-22954) Schedule Repl Load using Hive Scheduler
[ https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-22954 started by Aasha Medhi. -- > Schedule Repl Load using Hive Scheduler > --- > > Key: HIVE-22954 > URL: https://issues.apache.org/jira/browse/HIVE-22954 > Project: Hive > Issue Type: Bug >Reporter: Aasha Medhi >Assignee: Aasha Medhi >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22929) Performance: quoted identifier parsing uses throwaway Regex via String.replaceAll()
[ https://issues.apache.org/jira/browse/HIVE-22929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-22929: -- Status: Patch Available (was: Open) > Performance: quoted identifier parsing uses throwaway Regex via > String.replaceAll() > --- > > Key: HIVE-22929 > URL: https://issues.apache.org/jira/browse/HIVE-22929 > Project: Hive > Issue Type: Bug >Reporter: Gopal Vijayaraghavan >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-22929.1.patch, HIVE-22929.2.patch, > HIVE-22929.2.patch, HIVE-22929.2.patch, HIVE-22929.2.patch, > String.replaceAll.png > > > !String.replaceAll.png! > https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g#L530 > {code} > '`' ( '``' | ~('`') )* '`' { setText(getText().substring(1, > getText().length() -1 ).replaceAll("``", "`")); } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22929) Performance: quoted identifier parsing uses throwaway Regex via String.replaceAll()
[ https://issues.apache.org/jira/browse/HIVE-22929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-22929: -- Attachment: HIVE-22929.2.patch > Performance: quoted identifier parsing uses throwaway Regex via > String.replaceAll() > --- > > Key: HIVE-22929 > URL: https://issues.apache.org/jira/browse/HIVE-22929 > Project: Hive > Issue Type: Bug >Reporter: Gopal Vijayaraghavan >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-22929.1.patch, HIVE-22929.2.patch, > HIVE-22929.2.patch, HIVE-22929.2.patch, HIVE-22929.2.patch, > String.replaceAll.png > > > !String.replaceAll.png! > https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g#L530 > {code} > '`' ( '``' | ~('`') )* '`' { setText(getText().substring(1, > getText().length() -1 ).replaceAll("``", "`")); } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22929) Performance: quoted identifier parsing uses throwaway Regex via String.replaceAll()
[ https://issues.apache.org/jira/browse/HIVE-22929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-22929: -- Status: Open (was: Patch Available) > Performance: quoted identifier parsing uses throwaway Regex via > String.replaceAll() > --- > > Key: HIVE-22929 > URL: https://issues.apache.org/jira/browse/HIVE-22929 > Project: Hive > Issue Type: Bug >Reporter: Gopal Vijayaraghavan >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-22929.1.patch, HIVE-22929.2.patch, > HIVE-22929.2.patch, HIVE-22929.2.patch, HIVE-22929.2.patch, > String.replaceAll.png > > > !String.replaceAll.png! > https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g#L530 > {code} > '`' ( '``' | ~('`') )* '`' { setText(getText().substring(1, > getText().length() -1 ).replaceAll("``", "`")); } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
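The lexer action quoted above calls {{String.replaceAll}}, which compiles a throwaway regex {{Pattern}} on every quoted identifier. Since the pattern is a literal two-character string, {{String.replace}} (literal, non-regex replacement) avoids that per-token compile. A minimal sketch of the idea; the {{unquote}} helper is illustrative, not the actual lexer action from the patch:

```java
// Sketch of the fix direction: replaceAll("``", "`") compiles a regex Pattern
// per call, while replace("``", "`") does a plain literal replacement.
public class QuotedIdentifier {
    // Mirrors the lexer action: strip the surrounding backticks, then collapse
    // doubled backticks used for escaping inside the identifier.
    static String unquote(String token) {
        String body = token.substring(1, token.length() - 1);
        return body.replace("``", "`"); // literal replacement, no regex compile
    }

    public static void main(String[] args) {
        System.out.println(unquote("`col``umn`")); // col`umn
    }
}
```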
[jira] [Updated] (HIVE-22901) Variable substitution can lead to OOM on circular references
[ https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Voros updated HIVE-22901: Status: Patch Available (was: In Progress) Attached patch #1 that: - introduces new _restricted_ variable {{hive.query.max.length}} to limit the max length of queries with default of 10Mb. - enforces this limit during variable substitution > Variable substitution can lead to OOM on circular references > > > Key: HIVE-22901 > URL: https://issues.apache.org/jira/browse/HIVE-22901 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.2 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-22901.1.patch > > > {{SystemVariables#substitute()}} is dealing with circular references between > variables by only doing the substitution 40 times by default. If the > substituted part is sufficiently large though, it's possible that the > substitution will produce a string bigger than the heap size within the 40 > executions. 
> Take the following test case that fails with OOM in current master (third > round of execution would need 10G heap, while running with only 2G): > {code} > @Test > public void testSubstitute() { > String randomPart = RandomStringUtils.random(100_000); > String reference = "${hiveconf:myTestVariable}"; > StringBuilder longStringWithReferences = new StringBuilder(); > for(int i = 0; i < 10; i ++) { > longStringWithReferences.append(randomPart).append(reference); > } > SystemVariables uut = new SystemVariables(); > HiveConf conf = new HiveConf(); > conf.set("myTestVariable", longStringWithReferences.toString()); > uut.substitute(conf, longStringWithReferences.toString(), 40); > } > {code} > Produces: > {code} > java.lang.OutOfMemoryError: Java heap space > at java.util.Arrays.copyOf(Arrays.java:3332) > at > java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) > at > java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) > at java.lang.StringBuilder.append(StringBuilder.java:136) > at > org.apache.hadoop.hive.conf.SystemVariables.substitute(SystemVariables.java:110) > at > org.apache.hadoop.hive.conf.SystemVariablesTest.testSubstitute(SystemVariablesTest.java:27) > {code} > We should check the size of the substituted query and bail out earlier. -- This message was sent by Atlassian Jira (v8.3.4#803005)
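The guard the patch describes (cap the query length, enforce it during substitution rather than relying only on the 40-round cap) can be sketched as below. All names here ({{SubstituteSketch}}, {{MAX_QUERY_LENGTH}}, {{substituteWithLimit}}) are hypothetical stand-ins, not the actual code in HIVE-22901.1.patch:

```java
// Illustrative sketch: bail out of variable substitution as soon as the
// expanded string exceeds a configured maximum, instead of only capping the
// number of substitution rounds. Names are hypothetical, not from the patch.
import java.util.Map;

public class SubstituteSketch {
    static final int MAX_QUERY_LENGTH = 10 * 1024 * 1024; // e.g. a 10 MB default

    static String substituteWithLimit(Map<String, String> vars, String expr, int maxRounds) {
        String current = expr;
        for (int round = 0; round < maxRounds; round++) {
            String next = current;
            for (Map.Entry<String, String> e : vars.entrySet()) {
                next = next.replace("${hiveconf:" + e.getKey() + "}", e.getValue());
            }
            // The new check: a circular reference can double the string each
            // round, so enforce the size limit before the round cap is reached.
            if (next.length() > MAX_QUERY_LENGTH) {
                throw new IllegalStateException("Query length " + next.length()
                        + " exceeds limit " + MAX_QUERY_LENGTH);
            }
            if (next.equals(current)) {
                return next; // fixed point reached, no references left
            }
            current = next;
        }
        return current;
    }
}
```

With a self-referencing variable (as in the test case above), the length check fires well before the 40 rounds exhaust the heap.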
[jira] [Updated] (HIVE-22901) Variable substitution can lead to OOM on circular references
[ https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Voros updated HIVE-22901: Attachment: HIVE-22901.1.patch > Variable substitution can lead to OOM on circular references > > > Key: HIVE-22901 > URL: https://issues.apache.org/jira/browse/HIVE-22901 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.2 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-22901.1.patch > > > {{SystemVariables#substitute()}} is dealing with circular references between > variables by only doing the substitution 40 times by default. If the > substituted part is sufficiently large though, it's possible that the > substitution will produce a string bigger than the heap size within the 40 > executions. > Take the following test case that fails with OOM in current master (third > round of execution would need 10G heap, while running with only 2G): > {code} > @Test > public void testSubstitute() { > String randomPart = RandomStringUtils.random(100_000); > String reference = "${hiveconf:myTestVariable}"; > StringBuilder longStringWithReferences = new StringBuilder(); > for(int i = 0; i < 10; i ++) { > longStringWithReferences.append(randomPart).append(reference); > } > SystemVariables uut = new SystemVariables(); > HiveConf conf = new HiveConf(); > conf.set("myTestVariable", longStringWithReferences.toString()); > uut.substitute(conf, longStringWithReferences.toString(), 40); > } > {code} > Produces: > {code} > java.lang.OutOfMemoryError: Java heap space > at java.util.Arrays.copyOf(Arrays.java:3332) > at > java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) > at > java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) > at java.lang.StringBuilder.append(StringBuilder.java:136) > at > org.apache.hadoop.hive.conf.SystemVariables.substitute(SystemVariables.java:110) > at > 
org.apache.hadoop.hive.conf.SystemVariablesTest.testSubstitute(SystemVariablesTest.java:27) > {code} > We should check the size of the substituted query and bail out earlier. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HIVE-22901) Variable substitution can lead to OOM on circular references
[ https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-22901 started by Daniel Voros. --- > Variable substitution can lead to OOM on circular references > > > Key: HIVE-22901 > URL: https://issues.apache.org/jira/browse/HIVE-22901 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.2 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-22901.1.patch > > > {{SystemVariables#substitute()}} is dealing with circular references between > variables by only doing the substitution 40 times by default. If the > substituted part is sufficiently large though, it's possible that the > substitution will produce a string bigger than the heap size within the 40 > executions. > Take the following test case that fails with OOM in current master (third > round of execution would need 10G heap, while running with only 2G): > {code} > @Test > public void testSubstitute() { > String randomPart = RandomStringUtils.random(100_000); > String reference = "${hiveconf:myTestVariable}"; > StringBuilder longStringWithReferences = new StringBuilder(); > for(int i = 0; i < 10; i ++) { > longStringWithReferences.append(randomPart).append(reference); > } > SystemVariables uut = new SystemVariables(); > HiveConf conf = new HiveConf(); > conf.set("myTestVariable", longStringWithReferences.toString()); > uut.substitute(conf, longStringWithReferences.toString(), 40); > } > {code} > Produces: > {code} > java.lang.OutOfMemoryError: Java heap space > at java.util.Arrays.copyOf(Arrays.java:3332) > at > java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) > at > java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) > at java.lang.StringBuilder.append(StringBuilder.java:136) > at > org.apache.hadoop.hive.conf.SystemVariables.substitute(SystemVariables.java:110) > at > 
org.apache.hadoop.hive.conf.SystemVariablesTest.testSubstitute(SystemVariablesTest.java:27) > {code} > We should check the size of the substituted query and bail out earlier. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22785) Update/delete/merge statements not optimized through CBO
[ https://issues.apache.org/jira/browse/HIVE-22785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-22785: -- Status: Patch Available (was: Open) > Update/delete/merge statements not optimized through CBO > > > Key: HIVE-22785 > URL: https://issues.apache.org/jira/browse/HIVE-22785 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Krisztian Kasa >Priority: Critical > Attachments: HIVE-22785.1.patch, HIVE-22785.2.patch, > HIVE-22785.2.patch > > > Currently, CBO is bypassed for update/delete/merge statements. > To support optimizing these statements through CBO, we need to complete three > main tasks: 1) support for sort in Calcite planner, 2) support for SORT in > AST converter, and 3) {{RewriteSemanticAnalyzer}} should extend > {{CalcitePlanner}} instead of {{SemanticAnalyzer}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22785) Update/delete/merge statements not optimized through CBO
[ https://issues.apache.org/jira/browse/HIVE-22785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-22785: -- Status: Open (was: Patch Available) > Update/delete/merge statements not optimized through CBO > > > Key: HIVE-22785 > URL: https://issues.apache.org/jira/browse/HIVE-22785 > Project: Hive > Issue Type: Improvement > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Krisztian Kasa >Priority: Critical > Attachments: HIVE-22785.1.patch, HIVE-22785.2.patch, > HIVE-22785.2.patch > > > Currently, CBO is bypassed for update/delete/merge statements. > To support optimizing these statements through CBO, we need to complete three > main tasks: 1) support for sort in Calcite planner, 2) support for SORT in > AST converter, and 3) {{RewriteSemanticAnalyzer}} should extend > {{CalcitePlanner}} instead of {{SemanticAnalyzer}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)