[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17012350#comment-17012350 ]

Jesus Camacho Rodriguez commented on HIVE-22595:

+1

> Dynamic partition inserts fail on Avro table with external schema
> ---
>
> Key: HIVE-22595
> URL: https://issues.apache.org/jira/browse/HIVE-22595
> Project: Hive
> Issue Type: Bug
> Components: Avro, Serializers/Deserializers
> Reporter: Jason Dere
> Assignee: Jason Dere
> Priority: Major
> Attachments: HIVE-22595.1.patch, HIVE-22595.2.patch, HIVE-22595.3.patch
>
> Example qfile test:
> {noformat}
> create external table avro_extschema_insert1 (name string) partitioned by (p1 string)
> stored as avro tblproperties ('avro.schema.url'='${system:test.tmp.dir}/table1.avsc');
> create external table avro_extschema_insert2 like avro_extschema_insert1;
> insert overwrite table avro_extschema_insert1 partition (p1='part1') values ('col1_value', 1, 'col3_value');
> insert overwrite table avro_extschema_insert2 partition (p1) select * from avro_extschema_insert1;
> {noformat}
> The last statement fails with the following error:
> {noformat}
> ], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : attempt_1575484789169_0003_4_00_00_3:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
>   at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>   at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>   at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>   at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
>   at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101)
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
>   at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576)
>   at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
>   ... 19 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Number of input columns was different than output columns (in = 2 vs out = 1
>   at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1047)
>   at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>   at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>   at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Op
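The table1.avsc referenced by avro.schema.url is not included in the ticket, but the three-value insert against a one-column DDL implies that the external schema declares three fields. A minimal sketch of that disagreement, with hypothetical field names (col1/col2/col3 are illustrative guesses, not from the ticket):

```python
# Hypothetical external Avro schema (field names are illustrative): the
# avro.schema.url file presumably declares three fields, while the
# CREATE TABLE statement declares only one column ("name string").
table1_avsc = {
    "type": "record",
    "name": "table1",
    "fields": [
        {"name": "col1", "type": "string"},
        {"name": "col2", "type": "int"},
        {"name": "col3", "type": "string"},
    ],
}

ddl_columns = ["name"]  # from: create external table ... (name string)
avro_columns = [f["name"] for f in table1_avsc["fields"]]

# The root of the bug: the metastore column list and the external Avro
# schema disagree on how many data columns each row carries.
print(len(ddl_columns), len(avro_columns))  # prints: 1 3
```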
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17008999#comment-17008999 ]

Jason Dere commented on HIVE-22595:
---
[~jcamachorodriguez] [~ashutoshc] can you review this one?
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17007879#comment-17007879 ]

Hive QA commented on HIVE-22595:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12989911/HIVE-22595.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 17787 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/20073/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20073/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20073/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12989911 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17007859#comment-17007859 ]

Hive QA commented on HIVE-22595:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks || || || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests || || || ||
| 0 | mvndep | 2m 4s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 46s | master passed |
| +1 | compile | 1m 23s | master passed |
| +1 | checkstyle | 0m 53s | master passed |
| 0 | findbugs | 0m 41s | serde in master has 198 extant Findbugs warnings. |
| 0 | findbugs | 4m 8s | ql in master has 1531 extant Findbugs warnings. |
| +1 | javadoc | 1m 15s | master passed |
|| Patch Compile Tests || || || ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 46s | the patch passed |
| +1 | compile | 1m 24s | the patch passed |
| +1 | javac | 1m 24s | the patch passed |
| -1 | checkstyle | 0m 15s | serde: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 5m 3s | the patch passed |
| +1 | javadoc | 1m 17s | the patch passed |
|| Other Tests || || || ||
| +1 | asflicense | 0m 14s | The patch does not generate ASF License warnings. |
| | | 29m 56s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-20073/dev-support/hive-personality.sh |
| git revision | master / d981e8d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-20073/yetus/diff-checkstyle-serde.txt |
| modules | C: serde ql itests U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-20073/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17007793#comment-17007793 ]

Jason Dere commented on HIVE-22595:
---
Updating the patch so that avro_extschema_insert is only run by TestMiniLlapLocalCliDriver, and hoping that the TestHiveCli failures are due to HIVE-22649.
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16996435#comment-16996435 ]

Hive QA commented on HIVE-22595:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12988840/HIVE-22595.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 17780 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_extschema_insert] (batchId=5)
org.apache.hive.beeline.cli.TestHiveCli.testCmd (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testCommentStripping (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testDatabaseOptions (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testErrOutput (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testHelp (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testInValidCmd (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testInvalidDatabaseOptions (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testInvalidOptions (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testInvalidOptions2 (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSetHeaderValue (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSetPromptValue (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSourceCmd (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSourceCmd2 (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSourceCmd3 (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSourceCmd4 (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSqlFromCmd (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testSqlFromCmdWithDBName (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testUseCurrentDB1 (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testUseCurrentDB2 (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testUseCurrentDB3 (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testUseInvalidDB (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testVariables (batchId=206)
org.apache.hive.beeline.cli.TestHiveCli.testVariablesForSource (batchId=206)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19937/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19937/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19937/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12988840 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16996391#comment-16996391 ]

Hive QA commented on HIVE-22595:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks || || || ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests || || || ||
| 0 | mvndep | 1m 34s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 19s | master passed |
| +1 | compile | 1m 22s | master passed |
| +1 | checkstyle | 0m 52s | master passed |
| 0 | findbugs | 0m 38s | serde in master has 198 extant Findbugs warnings. |
| 0 | findbugs | 3m 51s | ql in master has 1531 extant Findbugs warnings. |
| +1 | javadoc | 1m 13s | master passed |
|| Patch Compile Tests || || || ||
| 0 | mvndep | 0m 25s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 41s | the patch passed |
| +1 | compile | 1m 20s | the patch passed |
| +1 | javac | 1m 20s | the patch passed |
| -1 | checkstyle | 0m 14s | serde: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 44s | the patch passed |
| +1 | javadoc | 1m 14s | the patch passed |
|| Other Tests || || || ||
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | 28m 2s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19937/dev-support/hive-personality.sh |
| git revision | master / f378aa4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-19937/yetus/diff-checkstyle-serde.txt |
| modules | C: serde ql itests U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19937/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16995988#comment-16995988 ] Jason Dere commented on HIVE-22595:
---
Looks like a bucketing column can get added to the end, which makes patch v1 incorrect. Attaching patch v2, which fixes this by having AvroSerDe update the column name/type properties when the serde is initialized. This should fix the behavior of the original call to Utilities.getDPColOffset().
> Dynamic partition inserts fail on Avro table with external schema
> ---
>
> Key: HIVE-22595
> URL: https://issues.apache.org/jira/browse/HIVE-22595
> Project: Hive
> Issue Type: Bug
> Components: Avro, Serializers/Deserializers
> Reporter: Jason Dere
> Assignee: Jason Dere
> Priority: Major
> Attachments: HIVE-22595.1.patch, HIVE-22595.2.patch
>
>
> Example qfile test:
> {noformat}
> create external table avro_extschema_insert1 (name string) partitioned by (p1
> string)
> stored as avro tblproperties
> ('avro.schema.url'='${system:test.tmp.dir}/table1.avsc');
> create external table avro_extschema_insert2 like avro_extschema_insert1;
> insert overwrite table avro_extschema_insert1 partition (p1='part1') values
> ('col1_value', 1, 'col3_value');
> insert overwrite table avro_extschema_insert2 partition (p1) select * from
> avro_extschema_insert1;
> {noformat}
> The last statement fails with the following error:
> {noformat}
> ], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) :
> attempt_1575484789169_0003_4_00_00_3:java.lang.RuntimeException:
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException:
> Hive Runtime Error while processing row
> at
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
> at
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
> at
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
> at
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> at
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
> at
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException:
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
> processing row
> at
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101)
> at
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
> at
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
> at
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
> ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime
> Error while processing row
> at
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576)
> at
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
> ... 19 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Number of input
> columns was different than output columns (in = 2 vs out = 1
> at
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1047)
> at
> org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Ope
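The v2 approach described above (AvroSerDe rewriting the column name/type table properties when the serde is initialized, so that later consumers such as Utilities.getDPColOffset() see the external schema's columns) can be sketched roughly as follows. This is a toy illustration with no Hive dependencies: only the "columns"/"columns.types" property names mirror Hive's table properties, and overrideColumnProps is a hypothetical stand-in for the real serde initialization.

```java
import java.util.Properties;

public class SerdeInitSketch {
    // Hypothetical stand-in for the AvroSerDe initialization step: when an
    // external schema (avro.schema.url) defines the real column list, rewrite
    // the "columns"/"columns.types" table properties so that later consumers
    // (e.g. the dynamic-partition offset computation) see the schema-derived
    // columns instead of the DDL-declared ones.
    public static void overrideColumnProps(Properties tblProps,
                                           String schemaCols, String schemaTypes) {
        tblProps.setProperty("columns", schemaCols);
        tblProps.setProperty("columns.types", schemaTypes);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // The DDL declared a single column ...
        props.setProperty("columns", "name");
        props.setProperty("columns.types", "string");
        // ... but the external Avro schema actually has three.
        overrideColumnProps(props, "col1,col2,col3", "string:int:string");

        // The first dynamic-partition column now starts after the three
        // schema-derived columns, not after the single declared one.
        int dpColOffset = props.getProperty("columns").split(",").length;
        System.out.println(dpColOffset); // prints 3
    }
}
```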
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991184#comment-16991184 ] Hive QA commented on HIVE-22595:
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12988284/HIVE-22595.1.patch
{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 80 failed/errored test(s), 17764 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[buckets] (batchId=304)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=304)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=304)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_buckets] (batchId=304)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[parquet_buckets] (batchId=304)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_buckets] (batchId=304)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_subquery] (batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_extschema_insert] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_transactional_full_acid] (batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_transactional_insert_only] (batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_all_partitioned] (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_where_partitioned] (batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynamic_partition_insert] (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_locks] (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_acid_dynamic_partition] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into_with_schema2] (batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_with_move_files_from_source_dir] (batchId=96)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part2] (batchId=65)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_static_ptn_into_bucketed_table] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_all_partitioned] (batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_where_partitioned] (batchId=69)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_dynamic] (batchId=193)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_dp] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization] (batchId=176)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization] (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_smb] (batchId=189)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[load_data_using_job] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[murmur_hash_migration] (batchId=179)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_analyze] (batchId=165)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=196)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_16] (batchId=139)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[dynpart_sort_optimization] (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[load_dyn_part2] (batchId=141)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[sample10] (batchId=136)
org.apache.hadoop.hive.ql.TestTxnCommands2.testDynamicPartitionsMerge (batchId=337)
org.apache.hadoop.hive.ql.TestTxnCommands2.testDynamicPartitionsMerge2 (batchId=337)
org.apache.hadoop.hive.ql.TestTxnCommands2.testMultiInsert (batchId=337)
org.apache.hadoop.hive.ql.TestTxnCommands2.updateDeletePartitioned (batchId=337)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testDynamicPartitionsMerge (batchId=351)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testDynamicPartitionsMerge2 (batchId=351)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testMultiInsert (batchId=351)
org.apache.hadoop.hive.ql.TestTxnCommands
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991162#comment-16991162 ] Hive QA commented on HIVE-22595:
| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 1532 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 43s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19827/dev-support/hive-personality.sh |
| git revision | master / a245e79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql itests U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19827/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |
This message was automatically generated.
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991076#comment-16991076 ] Jason Dere commented on HIVE-22595:
---
Patch v1 removes Utilities.getDPColOffset(), which is not correct when an external schema is used, and adds a testcase.
[jira] [Commented] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema
[ https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16990284#comment-16990284 ] Jason Dere commented on HIVE-22595:
---
I think this was introduced by HIVE-11972, but I guess we never ran into this one until now. In the case of an external schema we cannot rely on the column list from the TableDesc properties to give the correct number of columns.
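The root cause described in this comment can be illustrated with a self-contained sketch. Nothing below is Hive code: dpColOffsetFromTableDesc is a hypothetical stand-in for Utilities.getDPColOffset(), which derived the dynamic-partition offset from the TableDesc "columns" property (the DDL column list) rather than from the external Avro schema.

```java
import java.util.Arrays;
import java.util.List;

public class DpOffsetMismatch {
    // Hypothetical stand-in for Utilities.getDPColOffset(): the offset of the
    // first dynamic-partition column was taken as the number of columns in
    // the TableDesc "columns" property, i.e. the DDL-declared column list.
    public static int dpColOffsetFromTableDesc(String columnsProp) {
        return columnsProp.split(",").length;
    }

    public static void main(String[] args) {
        // DDL: create table ... (name string) partitioned by (p1 string),
        // so the TableDesc column list has a single entry.
        String tableDescColumns = "name";
        // The external avro.schema.url, however, defines three data columns.
        List<String> schemaColumns = Arrays.asList("col1", "col2", "col3");

        // A row arriving at the file sink: three data values plus the
        // dynamic-partition value.
        List<String> row = Arrays.asList("col1_value", "1", "col3_value", "part1");

        int offset = dpColOffsetFromTableDesc(tableDescColumns); // 1, not 3
        List<String> dataCols = row.subList(0, offset);

        // Only one field is handed to the serializer as data, while the Avro
        // schema expects three: the same shape of mismatch as the
        // "Number of input columns was different than output columns" error.
        System.out.println(dataCols.size() + " data column(s) vs "
                + schemaColumns.size() + " expected by the schema");
    }
}
```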