[jira] [Commented] (HIVE-18653) Fix TestOperators test failure in master
[ https://issues.apache.org/jira/browse/HIVE-18653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356619#comment-16356619 ] Deepak Jaiswal commented on HIVE-18653: --- +1 pending test results. > Fix TestOperators test failure in master > > > Key: HIVE-18653 > URL: https://issues.apache.org/jira/browse/HIVE-18653 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18653.1.patch > > > HIVE-17848 is causing TestOperators#testNoConditionalTaskSizeForLlap to fail > in master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18238) Driver execution may not have configuration changing sideeffects
[ https://issues.apache.org/jira/browse/HIVE-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-18238: Attachment: HIVE-18238.09.patch > Driver execution may not have configuration changing sideeffects > - > > Key: HIVE-18238 > URL: https://issues.apache.org/jira/browse/HIVE-18238 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-18238.01wip01.patch, HIVE-18238.02.patch, > HIVE-18238.03.patch, HIVE-18238.04.patch, HIVE-18238.04wip01.patch, > HIVE-18238.07.patch, HIVE-18238.08.patch, HIVE-18238.09.patch > > > {{Driver}} executes SQL statements which use "hiveconf" settings, > but the {{Driver}} itself may *not* change the configuration. > I've found an example which shows how hazardous this is:
> {code}
> set hive.mapred.mode=strict;
> select "${hiveconf:hive.mapred.mode}";
> create table t (a int);
> analyze table t compute statistics;
> select "${hiveconf:hive.mapred.mode}";
> {code}
> Currently, the last select returns {{nonstrict}} because of > [this|https://github.com/apache/hive/blob/7ddd915bf82a68c8ab73b0c4ca409f1a6d43d227/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L1696] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
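The side effect described above (the analyzer silently flipping {{hive.mapred.mode}} back to {{nonstrict}} for later statements) can be illustrated outside Hive. The sketch below is an assumption-laden stand-in: it uses plain `java.util.Properties` instead of Hive's `HiveConf`, but shows the same hazard, mutating the shared session configuration leaks into subsequent statements, while mutating a per-query snapshot does not.

```java
import java.util.Properties;

// Illustrative sketch only (not Hive code): Properties stands in for HiveConf.
public class ConfLeakSketch {
    // Leaky pattern: the "analyzer" writes straight into the shared session
    // conf, so the change is visible to every later statement.
    static String mutateShared(Properties session) {
        session.setProperty("hive.mapred.mode", "nonstrict");
        return session.getProperty("hive.mapred.mode");
    }

    // Safer pattern: mutate a per-query copy; the session value is untouched.
    static String mutateSnapshot(Properties session) {
        Properties perQuery = (Properties) session.clone();
        perQuery.setProperty("hive.mapred.mode", "nonstrict");
        return session.getProperty("hive.mapred.mode");
    }

    public static void main(String[] args) {
        Properties session = new Properties();
        session.setProperty("hive.mapred.mode", "strict");
        System.out.println(mutateSnapshot(session)); // session stays strict
        System.out.println(mutateShared(session));   // session leaked to nonstrict
    }
}
```

The patch's direction, per the issue title, is the second pattern: driver-side execution should not have configuration-changing side effects on the session.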
[jira] [Updated] (HIVE-18626) Repl load "with" clause does not pass config to tasks
[ https://issues.apache.org/jira/browse/HIVE-18626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-18626: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Test failures are not related. Patch pushed to master. > Repl load "with" clause does not pass config to tasks > - > > Key: HIVE-18626 > URL: https://issues.apache.org/jira/browse/HIVE-18626 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18626.1.patch, HIVE-18626.2.patch, > HIVE-18626.3.patch > > > The "with" clause in repl load is supposed to pass custom hive config entries to > replication. However, the config is only effective in > BootstrapEventsIterator, but not in the generated tasks (such as MoveTask, > DDLTask). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
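The shape of the fix can be sketched generically: the overrides from `REPL LOAD ... WITH (...)` must be merged into the configuration that is handed to every generated task, not only to the bootstrap event iterator. The class and method names below are illustrative assumptions, not Hive's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the idea behind the fix; names are hypothetical, not Hive classes.
public class ReplConfSketch {
    // Merge WITH-clause overrides on top of the base/session config.
    // The merged map is what each generated task (MoveTask, DDLTask, ...)
    // should receive, instead of the bare session config.
    static Map<String, String> withOverrides(Map<String, String> base,
                                             Map<String, String> with) {
        Map<String, String> conf = new HashMap<>(base);
        conf.putAll(with); // WITH-clause entries win over session defaults
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> session = Map.of("hive.exec.parallel", "false");
        Map<String, String> with = Map.of("hive.exec.parallel", "true");
        System.out.println(withOverrides(session, with).get("hive.exec.parallel"));
    }
}
```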
[jira] [Commented] (HIVE-17063) insert overwrite partition onto a external table fail when drop partition first
[ https://issues.apache.org/jira/browse/HIVE-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356627#comment-16356627 ] Deepak Jaiswal commented on HIVE-17063: --- The test failures are unrelated. Pushed to master. Thanks [~ashutoshc] for the review. > insert overwrite partition onto a external table fail when drop partition > first > --- > > Key: HIVE-17063 > URL: https://issues.apache.org/jira/browse/HIVE-17063 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 1.2.2, 2.1.1, 2.2.0 >Reporter: Wang Haihua >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-17063.1.patch, HIVE-17063.2.patch, > HIVE-17063.3.patch, HIVE-17063.4.patch > > > The default value of {{hive.exec.stagingdir}} is a relative path, and > dropping a partition on an external table does not clear the real data. As a > result, running insert overwrite partition twice fails because the target data > to be moved already exists. > This happened when we reproduced partition data onto an external table. 
> I see the target data will not be cleared only when the {{immediately generated > data}} is a child of {{the target data directory}}, so my proposal is > to clear the already-existing target file when renaming the {{immediately > generated data}} into {{the target data directory}}. > Operation reproduced: > {code} > create external table insert_after_drop_partition(key string, val string) > partitioned by (insertdate string); > from src insert overwrite table insert_after_drop_partition partition > (insertdate='2008-01-01') select *; > alter table insert_after_drop_partition drop partition > (insertdate='2008-01-01'); > from src insert overwrite table insert_after_drop_partition partition > (insertdate='2008-01-01') select *; > {code} > Stack trace: > {code} > 2017-07-09T08:32:05,212 ERROR [f3bc51c8-2441-4689-b1c1-d60aef86c3aa main] > exec.Task: Failed with exception java.io.IOException: rename for src path: > pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/.hive-staging_hive_2017-07-09_08-32-03_840_4046825276907030554-1/-ext-1/00_0 > to dest > path:pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/00_0 > returned false > org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename > for src path: > pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/.hive-staging_hive_2017-07-09_08-32-03_840_4046825276907030554-1/-ext-1/00_0 > to dest > path:pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/00_0 > returned false > at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2992) > at > org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3248) > at > org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1532) > at > 
org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1461) > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:498) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1137) > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:) > at > org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:120) > at > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_after_drop_partition(TestCliDriver.java:103) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(Delegating
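The rename failure in the stack trace has the same shape as plain filesystem move semantics: a rename refuses to clobber a target that already exists (the stale file left behind by the dropped partition). Below is a minimal JDK-only sketch, not Hive code, of the proposal's idea: detect the collision and clear/replace the stale target before completing the rename. The file names are illustrative.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameSketch {
    // Returns true if the plain rename failed because the target already
    // existed; in that case the stale target is replaced, mirroring the
    // "clear the already-existing target file" proposal in the issue.
    static boolean moveWithReplace(Path staging, Path target) throws IOException {
        try {
            Files.move(staging, target); // refuses to overwrite an existing file
            return false;
        } catch (FileAlreadyExistsException e) {
            Files.move(staging, target, StandardCopyOption.REPLACE_EXISTING);
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("partition");
        Path staging = Files.writeString(dir.resolve("staging_file"), "new");
        Path target = Files.writeString(dir.resolve("target_file"), "old");
        System.out.println(moveWithReplace(staging, target)); // collision handled
        System.out.println(Files.readString(target));         // new data in place
    }
}
```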
[jira] [Commented] (HIVE-18350) load data should rename files consistent with insert statements
[ https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356626#comment-16356626 ] Hive QA commented on HIVE-18350: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909645/HIVE-18350.16.patch
{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 12970 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] (batchId=250)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9087/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9087/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9087/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}
This message is automatically generated. ATTACHMENT ID: 12909645 - PreCommit-HIVE-Build > load data should rename files consistent with insert statements > --- > > Key: HIVE-18350 > URL: https://issues.apache.org/jira/browse/HIVE-18350 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18350.1.patch, HIVE-18350.10.patch, > HIVE-18350.11.patch, HIVE-18350.12.patch, HIVE-18350.13.patch, > HIVE-18350.14.patch, HIVE-18350.15.patch, HIVE-18350.16.patch, > HIVE-18350.2.patch, HIVE-18350.3.patch, HIVE-18350.4.patch, > HIVE-18350.5.patch, HIVE-18350.6.patch, HIVE-18350.7.patch, > HIVE-18350.8.patch, HIVE-18350.9.patch > > > Insert statements create files of format ending with _0, 0001_0 etc. > However, load data uses the input file name. That results in an inconsistent > naming convention, which makes SMB joins difficult in some scenarios and may > cause trouble for other types of queries in the future. > We need a consistent naming convention. 
> For a non-bucketed table, hive renames all the files regardless of how they > were named by the user. > For a bucketed table, hive relies on the user to name the files matching the > bucket in non-strict mode. Hive assumes that the data in a file belongs to the > same bucket. In strict mode, loading a bucketed table is disabled. > This will likely affect most of the tests which load data, which is significant > enough that the work is further divided into two subtasks for a smoother > merge. > For existing tables in a customer database, it is recommended to reload > bucketed tables; otherwise, if the customer runs an SMB join and there is a > bucket for which there is no split, there is a possibility of getting > incorrect results. However, this is not a regression as it would happ
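The renaming convention the issue describes can be sketched as follows. This is an illustration only, not the patch's actual code: for a non-bucketed table, files loaded via LOAD DATA get task-style names (the `%06d_0` format string is an assumption based on the `000000_0`-style names the description mentions), regardless of the user's original file names.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of insert-statement-style renaming for LOAD DATA.
public class LoadRenameSketch {
    // Map arbitrary user file names to sequential task-style names,
    // e.g. mydata.txt -> 000000_0, part-r-00001 -> 000001_0.
    static List<String> renameForLoad(List<String> userFiles) {
        List<String> renamed = new ArrayList<>();
        for (int i = 0; i < userFiles.size(); i++) {
            renamed.add(String.format("%06d_0", i)); // assumed name format
        }
        return renamed;
    }

    public static void main(String[] args) {
        System.out.println(renameForLoad(List.of("mydata.txt", "part-r-00001")));
    }
}
```

For bucketed tables the issue notes Hive instead relies on the user-supplied names matching the bucket (non-strict mode), so a blind sequential rename like the one above would not be safe there.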
[jira] [Issue Comment Deleted] (HIVE-17063) insert overwrite partition onto a external table fail when drop partition first
[ https://issues.apache.org/jira/browse/HIVE-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-17063: -- Comment: was deleted (was: The test failures are unrelated. Pushed to master. Thanks [~ashutoshc] for the review.)
[jira] [Updated] (HIVE-17063) insert overwrite partition onto a external table fail when drop partition first
[ https://issues.apache.org/jira/browse/HIVE-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-17063: -- Resolution: Fixed Status: Resolved (was: Patch Available) The test failures are unrelated. Pushed to master. Thanks [~ashutoshc] for the review.
[jira] [Commented] (HIVE-18622) Vectorization: IF Statements, Comparisons, and more do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356631#comment-16356631 ] Matt McCline commented on HIVE-18622: - [~vihangk1] This problem causes wrong query results. Backporting will be manual and very tedious. > Vectorization: IF Statements, Comparisons, and more do not handle NULLs > correctly > - > > Key: HIVE-18622 > URL: https://issues.apache.org/jira/browse/HIVE-18622 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18622.03.patch > > > > Many vector expression classes are missing guards around setting noNulls > among other things. > {code:java} > // Carefully update noNulls... > if (outputColVector.noNulls) { > outputColVector.noNulls = inputColVector.noNulls; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
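The snippet quoted in the issue guards `noNulls` so a vector is never blindly marked null-free. The toy example below is a stand-in, `Col` is a hypothetical simplification of Hive's `ColumnVector`, not the real class, contrasting the buggy blind copy with a guarded update in the spirit of that snippet: only an input that actually has nulls may clear `noNulls` on the output, and the per-row `isNull` flags are propagated with it.

```java
public class NoNullsGuardSketch {
    // Hypothetical stand-in for a ColumnVector (illustration only).
    static class Col {
        boolean noNulls;     // true claims no row in this vector is NULL
        boolean[] isNull;    // per-row null flags, only valid when !noNulls
        Col(boolean noNulls, boolean[] isNull) {
            this.noNulls = noNulls;
            this.isNull = isNull;
        }
    }

    // Buggy pattern: overwriting noNulls unconditionally can declare the
    // output null-free while stale isNull entries still say otherwise.
    static void copyNullsBuggy(Col out, Col in) {
        out.noNulls = in.noNulls;
    }

    // Guarded pattern: an input with nulls clears noNulls on the output
    // and carries its isNull flags along, so the two stay consistent.
    static void copyNullsGuarded(Col out, Col in) {
        if (!in.noNulls) {
            out.noNulls = false;
            System.arraycopy(in.isNull, 0, out.isNull, 0, in.isNull.length);
        }
    }

    public static void main(String[] args) {
        Col in = new Col(false, new boolean[]{true, false});
        Col out = new Col(true, new boolean[]{false, false});
        copyNullsGuarded(out, in);
        System.out.println(out.noNulls + " " + out.isNull[0]); // false true
    }
}
```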
[jira] [Updated] (HIVE-18624) Parsing time is extremely high (~10 min) for queries with complex select expressions
[ https://issues.apache.org/jira/browse/HIVE-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amruth S updated HIVE-18624: Description: Explain of the same query takes 0.1 to 3 seconds in hive 2.1.0 & 10-15 min in hive 2.3.2 & latest master Sample expression below {code:java} EXPLAIN SELECT DISTINCT IF(lower('a') <= lower('a') ,'a' ,IF(('a' IS NULL AND from_unixtime(UNIX_TIMESTAMP()) <= 'a') ,'a' ,IF(if('a' = 'a', TRUE, FALSE) = 1 ,'a' ,IF(('a' = 1 and lower('a') NOT IN ('a', 'a') and lower(if('a' = 'a','a','a')) <= lower('a')) OR ('a' like 'a' OR 'a' like 'a') OR 'a' in ('a','a') ,'a' ,IF(if(lower('a') in ('a', 'a') and 'a'='a', TRUE, FALSE) = 1 ,'a' ,IF('a'='a' and unix_timestamp(if('a' = 'a',cast('a' as string),coalesce('a',cast('a' as string),from_unixtime(unix_timestamp() <= unix_timestamp(concat_ws('a',cast(lower('a') as string),'00:00:00')) + 9*3600 ,'a' ,If(lower('a') <= lower('a') and if(lower('a') in ('a', 'a') and 'a'<>'a', TRUE, FALSE) <> 1 ,'a' ,IF('a'=1 AND 'a'=1 ,'a' ,IF('a' = 1 and COALESCE(cast('a' as int),0) = 0 ,'a' ,IF('a' = 'a' ,'a' ,If('a' = 'a' AND lower('a')>lower(if(lower('a')<1830,'a',cast(date_add('a',1) as timestamp))) ,'a' ,IF('a' = 1 ,IF('a' in ('a', 'a') and ((unix_timestamp('a')-unix_timestamp('a')) / 60) > 30 and 'a' = 1 ,'a', 'a') ,IF(if('a' = 'a', FALSE, TRUE ) = 1 AND 'a' IS NULL ,'a' ,IF('a' = 1 and 'a'>0 , 'a' ,IF('a' = 1 AND 'a' ='a' ,'a' ,IF('a' is not null and 'a' is not null and 'a' > 'a' ,'a' ,IF('a' = 1 ,'a' ,IF('a' = 'a' ,'a' ,If('a' = 1 ,'a' ,IF('a' = 1 ,'a' ,IF('a' = 1 ,'a' ,IF('a' ='a' and 'a' ='a' and cast(unix_timestamp('a') as int) + 93600 < cast(unix_timestamp() as int) ,'a' ,IF('a' = 'a' ,'a' ,IF('a' = 'a' and 'a' in ('a','a','a') ,'a' ,IF('a' = 'a' ,'a','a')) ))) AS test_comp_exp {code} Taking a look at [^thread_dump] shows a very large function stack getting created. Reverting HIVE-15578 (92f31d07aa988d4a460aac56e369bfa386361776) seem to speed up the parsing. 
> Parsing time is extremely high (~10 min) for queries with complex select > expressions > > > Key: HIVE-18624 > URL: https://issues.apache.org/jira/browse/HIVE-18624 > Project: Hive > Issue Type: Bug > Components: Hive, Parser >Affects Versions: 2.3.2 >Reporter: Amruth S >Priority: Major > Attachments: thread_dump > > > Explain of the same query takes > 0.1 to 3 seconds in hive 2.1.0 & > 10-15 min in hive 2.3.2 & latest master
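The reporter's test case is a deeply nested chain of `IF(...)` expressions. A small generator like the one below, an illustration, not part of the issue's attachments, makes it easy to produce such expressions at increasing depth and feed them to `EXPLAIN` to measure how parse time scales between versions.

```java
public class NestedIfSketch {
    // Builds a nested IF(...) expression similar in shape to the reporter's
    // query: each level wraps the next in the false branch.
    static String nestedIf(int depth) {
        if (depth == 0) {
            return "'a'";
        }
        return "IF('a' = 'a', 'a', " + nestedIf(depth - 1) + ")";
    }

    public static void main(String[] args) {
        // Increase the depth to stress the parser; the issue's query nests
        // roughly 25 levels with much larger sub-expressions at each level.
        System.out.println("EXPLAIN SELECT DISTINCT " + nestedIf(3)
                + " AS test_comp_exp");
    }
}
```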
[jira] [Commented] (HIVE-18238) Driver execution may not have configuration changing sideeffects
[ https://issues.apache.org/jira/browse/HIVE-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356655#comment-16356655 ] Hive QA commented on HIVE-18238:
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 53s{color} | {color:red} ql: The patch generated 12 new + 1708 unchanged - 9 fixed = 1720 total (was 1717) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} cli: The patch generated 1 new + 34 unchanged - 0 fixed = 35 total (was 34) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} hcatalog/core: The patch generated 1 new + 32 unchanged - 2 fixed = 33 total (was 34) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 38s{color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1faadb0 |
| Default Java | 1.8.0_111 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9088/yetus/diff-checkstyle-ql.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9088/yetus/diff-checkstyle-cli.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9088/yetus/diff-checkstyle-hcatalog_core.txt |
| modules | C: ql service cli hcatalog/core hcatalog/hcatalog-pig-adapter itests/hive-unit U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9088/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |
This message was automatically generated. > Driver execution may not have configuration changing sideeffects > - > > Key: HIVE-18238 > URL: https://issues.apache.org/jira/browse/HIVE-18238 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-18238.01wip01.patch, HIVE-18238.02.patch, > HIVE-18238.03.patch, HIVE-18238.04.patch, HIVE-18238.04wip01.patch, > HIVE-18238.07.patch, HIVE-18238.08.patch, HIVE-18238.09.patch > > > {{Driver}} executes sql statements which use "hiveconf" settings; > but the {{Driver}} itself may *not* change t
[jira] [Updated] (HIVE-18350) load data should rename files consistent with insert statements
[ https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18350: -- Resolution: Fixed Status: Resolved (was: Patch Available) The failures are unrelated, pushed to master. Thanks [~sershe] and [~ashutoshc] for the reviews. > load data should rename files consistent with insert statements > --- > > Key: HIVE-18350 > URL: https://issues.apache.org/jira/browse/HIVE-18350 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18350.1.patch, HIVE-18350.10.patch, > HIVE-18350.11.patch, HIVE-18350.12.patch, HIVE-18350.13.patch, > HIVE-18350.14.patch, HIVE-18350.15.patch, HIVE-18350.16.patch, > HIVE-18350.2.patch, HIVE-18350.3.patch, HIVE-18350.4.patch, > HIVE-18350.5.patch, HIVE-18350.6.patch, HIVE-18350.7.patch, > HIVE-18350.8.patch, HIVE-18350.9.patch > > > Insert statements create files with names ending in _0, 0001_0, etc. > However, load data uses the input file name. That results in an inconsistent > naming convention, which makes SMB joins difficult in some scenarios and may > cause trouble for other types of queries in the future. > We need a consistent naming convention. > For non-bucketed tables, Hive renames all the files regardless of how they > were named by the user. > For bucketed tables, Hive relies on the user to name the files to match the > buckets in non-strict mode. Hive assumes that the data in a file belongs to the > same bucket. In strict mode, loading a bucketed table is disabled. > This will likely affect most of the tests that load data, which is significant > enough that the work is further divided into two subtasks for a smoother > merge. > For existing tables in a customer database, it is recommended to reload > bucketed tables; otherwise, if the customer runs an SMB join and there is a > bucket for which there is no split, there is a possibility of getting > incorrect results. 
However, this is not a regression, as it would happen even > without the patch. > With this patch, however, after reloading the data, the results should be > correct. > For non-bucketed tables and external tables, there is no difference in > behavior and reloading data is not needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18238) Driver execution may not have configuration changing sideeffects
[ https://issues.apache.org/jira/browse/HIVE-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356717#comment-16356717 ] Hive QA commented on HIVE-18238: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909745/HIVE-18238.09.patch {color:green}SUCCESS:{color} +1 due to 11 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 12996 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221) org.apache.hadoop.hive.metastore.client.TestDatabases.testGetAllDatabases[Embedded] (batchId=213) org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded] (batchId=206) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9088/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9088/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9088/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 26 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12909745 - PreCommit-HIVE-Build > Driver execution may not have configuration changing sideeffects > - > > Key: HIVE-18238 > URL: https://issues.apache.org/jira/browse/HIVE-18238 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-18238.01wip01.patch, HIVE-18238.02.patch, > HIVE-18238.03.patch, HIVE-18238.04.patch, HIVE-18238.04wip01.patch, > HIVE-18238.07.patch, HIVE-18238.08.patch, HIVE-18238.09.patch > > > {{Driver}} executes sql statements which use "hiveconf" settings; > but the {{Driver}} itself may *not* change the configuration... > I've found an example; which shows how hazardous this is... > {code} > set hive.mapred.mode=strict; > select "${hiveconf:hive.mapred.mode}"; > create table t (a int); > analyze table t compute statistics; > select "${hiveconf:hive.mapred.mode}"; > {code} > currently; the last select returns {{nonstrict}} because of > [this|https://github.com/apache/hive/blob/7ddd915bf82a68c8ab73b0c4ca409f1a6d43d227/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L1696] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
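The configuration leak quoted in the HIVE-18238 description can be sketched outside of Hive. The following is a simplified, hypothetical illustration (not the actual patch; `runWithConfGuard` and the plain `Map` standing in for `HiveConf` are invented for this example) of guarding statement execution so that internal configuration changes, such as `SemanticAnalyzer` relaxing `hive.mapred.mode`, cannot leak into later statements:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: snapshot the configuration before running a
// statement and restore it afterward, so internal tweaks cannot leak.
public class ConfGuardSketch {

    static void runWithConfGuard(Map<String, String> conf, Runnable stmt) {
        Map<String, String> snapshot = new HashMap<>(conf); // defensive copy
        try {
            stmt.run();
        } finally {
            // Undo any side effects the statement made on the config.
            conf.clear();
            conf.putAll(snapshot);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hive.mapred.mode", "strict");

        // The statement internally relaxes the mode, as the analyzer did.
        runWithConfGuard(conf, () -> conf.put("hive.mapred.mode", "nonstrict"));

        // With the guard, the original setting survives.
        System.out.println(conf.get("hive.mapred.mode")); // strict
    }
}
```

With such a guard in place, the final `select "${hiveconf:hive.mapred.mode}"` in the reproducer would still see `strict` after the `analyze` statement.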
[jira] [Updated] (HIVE-18622) Vectorization: IF Statements, Comparisons, and more do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18622: Status: In Progress (was: Patch Available) > Vectorization: IF Statements, Comparisons, and more do not handle NULLs > correctly > - > > Key: HIVE-18622 > URL: https://issues.apache.org/jira/browse/HIVE-18622 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18622.03.patch > > > > Many vector expression classes are missing guards around setting noNulls > among other things. > {code:java} > // Carefully update noNulls... > if (outputColVector.noNulls) { > outputColVector.noNulls = inputColVector.noNulls; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18622) Vectorization: IF Statements, Comparisons, and more do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18622: Attachment: HIVE-18622.04.patch > Vectorization: IF Statements, Comparisons, and more do not handle NULLs > correctly > - > > Key: HIVE-18622 > URL: https://issues.apache.org/jira/browse/HIVE-18622 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18622.03.patch, HIVE-18622.04.patch > > > > Many vector expression classes are missing guards around setting noNulls > among other things. > {code:java} > // Carefully update noNulls... > if (outputColVector.noNulls) { > outputColVector.noNulls = inputColVector.noNulls; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-14171) Parquet: Simple vectorization throws NPEs
[ https://issues.apache.org/jira/browse/HIVE-14171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356734#comment-16356734 ] KaiXu commented on HIVE-14171: -- Thanks [~colinma] for the information. To [~vihangk1], several queries(e.g. q22, q64, q75, q80, q85) of TPC-DS hits java.lang.OutOfMemoryError: Java heap space, when set to false. It's OK with TXT file, the configuration is the same. java.lang.OutOfMemoryError: Java heap space at org.apache.hadoop.hive.serde2.WriteBuffers.nextBufferToWrite(WriteBuffers.java:246) at org.apache.hadoop.hive.serde2.WriteBuffers.write(WriteBuffers.java:222) at org.apache.hadoop.hive.serde2.WriteBuffers.write(WriteBuffers.java:207) at org.apache.hadoop.hive.ql.exec.persistence.BytesBytesMultiHashMap.put(BytesBytesMultiHashMap.java:422) at org.apache.hadoop.hive.ql.exec.persistence.MapJoinBytesTableContainer.putRow(MapJoinBytesTableContainer.java:395) at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.loadOptimized(MapJoinTableContainerSerDe.java:200) at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:152) at org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:169) at org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:148) at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:315) at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:187) at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:183) at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:60) at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieveAsync(ObjectCache.java:68) at org.apache.hadoop.hive.ql.exec.ObjectCacheWrapper.retrieveAsync(ObjectCacheWrapper.java:51) at org.apache.hadoop.hive.ql.exec.MapJoinOperator.initializeOp(MapJoinOperator.java:181) at 
org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:366) at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:556) at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:508) at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376) at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.init(SparkReduceRecordHandler.java:200) at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunction.call(HiveReduceFunction.java:46) at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunction.call(HiveReduceFunction.java:28) at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:185) at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:185) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:785) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) > Parquet: Simple vectorization throws NPEs > - > > Key: HIVE-14171 > URL: https://issues.apache.org/jira/browse/HIVE-14171 > Project: Hive > Issue Type: Bug > Components: File Formats, Vectorization >Affects Versions: 2.2.0 >Reporter: Gopal V >Priority: Major > Labels: Parquet > > {code} > create temporary table cd_parquet stored as parquet as select * from > customer_demographics; > select count(1) from cd_parquet where cd_gender = 'F'; > {code} > {code} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:206) > at > 
org.apache.hadoop.hive.ql.io.parquet.VectorizedParquetInputFormat$VectorizedParquetRecordReader.next(VectorizedParquetInputFormat.java:118) > at > org.apache.hadoop.hive.ql.io.parquet.VectorizedParquetInputFormat$VectorizedParquetRecordReader.next(VectorizedParquetInputFormat.java:51) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350) > ... 17 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18622) Vectorization: IF Statements, Comparisons, and more do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18622: Description: Many vector expression classes are setting noNulls to true which does not work if the VRB is a scratch column being reused. The previous use may have set noNulls to false and the isNull array will have some rows marked as NULL. The result is wrong query results and sometimes NPEs (for BytesColumnVector). So, many vector expressions need this: {code:java} // Carefully handle NULLs... /* * For better performance on LONG/DOUBLE we don't want the conditional * statements inside the for loop. */ outputColVector.noNulls = false; {code} And, vector expressions need to make sure the isNull array is set when outputColVector.noNulls is false. And, all place that assign column value need to set noNulls to false when the value is NULL. was: Many vector expression classes are missing guards around setting noNulls among other things. {code:java} // Carefully update noNulls... if (outputColVector.noNulls) { outputColVector.noNulls = inputColVector.noNulls; } {code} > Vectorization: IF Statements, Comparisons, and more do not handle NULLs > correctly > - > > Key: HIVE-18622 > URL: https://issues.apache.org/jira/browse/HIVE-18622 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18622.03.patch, HIVE-18622.04.patch > > > > Many vector expression classes are setting noNulls to true which does not > work if the VRB is a scratch column being reused. The previous use may have > set noNulls to false and the isNull array will have some rows marked as NULL. > The result is wrong query results and sometimes NPEs (for BytesColumnVector). > So, many vector expressions need this: > {code:java} > // Carefully handle NULLs... 
> /* >* For better performance on LONG/DOUBLE we don't want the conditional >* statements inside the for loop. >*/ > outputColVector.noNulls = false; > {code} > And, vector expressions need to make sure the isNull array is set when > outputColVector.noNulls is false. > And, all places that assign a column value need to set noNulls to false when the > value is NULL. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18622) Vectorization: IF Statements, Comparisons, and more do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18622: Description: Many vector expression classes are setting noNulls to true which does not work if the VRB is a scratch column being reused. The previous use may have set noNulls to false and the isNull array will have some rows marked as NULL. The result is wrong query results and sometimes NPEs (for BytesColumnVector). So, many vector expressions need this: {code:java} // Carefully handle NULLs... /* * For better performance on LONG/DOUBLE we don't want the conditional * statements inside the for loop. */ outputColVector.noNulls = false; {code} And, vector expressions need to make sure the isNull array entry is set when outputColVector.noNulls is false. And, all place that assign column value need to set noNulls to false when the value is NULL. Almost all cases where noNulls is set to true are incorrect. was: Many vector expression classes are setting noNulls to true which does not work if the VRB is a scratch column being reused. The previous use may have set noNulls to false and the isNull array will have some rows marked as NULL. The result is wrong query results and sometimes NPEs (for BytesColumnVector). So, many vector expressions need this: {code:java} // Carefully handle NULLs... /* * For better performance on LONG/DOUBLE we don't want the conditional * statements inside the for loop. */ outputColVector.noNulls = false; {code} And, vector expressions need to make sure the isNull array is set when outputColVector.noNulls is false. And, all place that assign column value need to set noNulls to false when the value is NULL. 
> Vectorization: IF Statements, Comparisons, and more do not handle NULLs > correctly > - > > Key: HIVE-18622 > URL: https://issues.apache.org/jira/browse/HIVE-18622 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18622.03.patch, HIVE-18622.04.patch > > > > Many vector expression classes are setting noNulls to true which does not > work if the VRB is a scratch column being reused. The previous use may have > set noNulls to false and the isNull array will have some rows marked as NULL. > The result is wrong query results and sometimes NPEs (for BytesColumnVector). > So, many vector expressions need this: > {code:java} > // Carefully handle NULLs... > /* >* For better performance on LONG/DOUBLE we don't want the conditional >* statements inside the for loop. >*/ > outputColVector.noNulls = false; > {code} > And, vector expressions need to make sure the isNull array entry is set when > outputColVector.noNulls is false. > And, all place that assign column value need to set noNulls to false when the > value is NULL. > Almost all cases where noNulls is set to true are incorrect. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18622) Vectorization: IF Statements, Comparisons, and more do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18622: Status: Patch Available (was: In Progress) > Vectorization: IF Statements, Comparisons, and more do not handle NULLs > correctly > - > > Key: HIVE-18622 > URL: https://issues.apache.org/jira/browse/HIVE-18622 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18622.03.patch, HIVE-18622.04.patch > > > > Many vector expression classes are setting noNulls to true which does not > work if the VRB is a scratch column being reused. The previous use may have > set noNulls to false and the isNull array will have some rows marked as NULL. > The result is wrong query results and sometimes NPEs (for BytesColumnVector). > So, many vector expressions need this: > {code:java} > // Carefully handle NULLs... > /* >* For better performance on LONG/DOUBLE we don't want the conditional >* statements inside the for loop. >*/ > outputColVector.noNulls = false; > {code} > And, vector expressions need to make sure the isNull array entry is set when > outputColVector.noNulls is false. > And, all place that assign column value need to set noNulls to false when the > value is NULL. > Almost all cases where noNulls is set to true are incorrect. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
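The noNulls/isNull contract described in HIVE-18622 can be illustrated with a self-contained sketch. The classes below (`NoNullsSketch`, `LongColVector`) are simplified stand-ins invented for this example, not Hive's actual ColumnVector classes. The point is that a reused scratch column may carry stale isNull state from a previous expression, so an expression that may produce NULLs must set noNulls to false and rewrite the isNull entry for every row it writes:

```java
// Hypothetical sketch illustrating the noNulls/isNull contract; the
// real Hive classes live in org.apache.hadoop.hive.ql.exec.vector.
public class NoNullsSketch {

    // Minimal stand-in for Hive's LongColumnVector.
    static class LongColVector {
        boolean noNulls = true;   // true => isNull may contain stale values
        boolean[] isNull;
        long[] vector;
        LongColVector(int size) {
            isNull = new boolean[size];
            vector = new long[size];
        }
    }

    // Copies input to output, propagating NULLs correctly.
    static void evaluate(LongColVector in, LongColVector out, int n) {
        // Never trust a reused scratch column's previous state:
        // clear noNulls up front, then set isNull for every row.
        out.noNulls = false;
        for (int i = 0; i < n; i++) {
            out.isNull[i] = !in.noNulls && in.isNull[i];
            out.vector[i] = in.vector[i];
        }
    }

    public static void main(String[] args) {
        int n = 3;
        LongColVector in = new LongColVector(n);
        in.noNulls = false;
        in.isNull[1] = true;            // row 1 is NULL
        in.vector[0] = 7;
        in.vector[2] = 9;

        LongColVector out = new LongColVector(n);
        out.isNull[0] = true;           // stale state from a previous use

        evaluate(in, out, n);
        System.out.println(out.noNulls);   // false
        System.out.println(out.isNull[0]); // false (stale flag overwritten)
        System.out.println(out.isNull[1]); // true  (NULL propagated)
    }
}
```

An expression that instead blindly set `out.noNulls = true` would leave the stale `isNull[0]` flag in place for any later consumer that does honor isNull, which is the class of wrong-result and NPE bugs the issue describes.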
[jira] [Commented] (HIVE-18637) WorkloadManagent Event Summary leaving subscribedCounters and currentCounters fields empty
[ https://issues.apache.org/jira/browse/HIVE-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356743#comment-16356743 ] Hive QA commented on HIVE-18637: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
checkstyle {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 6e9b63e | | Default Java | 1.8.0_111 | | modules | C: ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9089/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> WorkloadManagent Event Summary leaving subscribedCounters and currentCounters > fields empty > -- > > Key: HIVE-18637 > URL: https://issues.apache.org/jira/browse/HIVE-18637 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18637.1.patch, HIVE-18637.2.patch > > > subscribedCounters and currentCounters values are empty when trigger results > in MOVE event > WorkloadManager Events Summary > {noformat} > INFO : { > "queryId" : "hive_20180205214449_d2955891-e3b2-4ac3-bca9-5d2a53feb8c0", > "queryStartTime" : 1517867089060, > "queryEndTime" : 1517867144341, > "queryCompleted" : true, > "queryWmEvents" : [ { > "wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : "pool2", > "clusterPercent" : 30.0 > }, > "eventStartTimestamp" : 1517867094797, > "eventEndTimestamp" : 1517867094798, > "eventType" : "GET", > "elapsedTime" : 1 > }, { > "wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : "pool1", > "clusterPercent" : 70.0 > }, > "eventStartTimestamp" : 1517867139886, > "eventEndTimestamp" : 1517867139887, > "eventType" : "MOVE", > "elapsedTime" : 1 > }, { > "w
[jira] [Commented] (HIVE-18647) Cannot create table: Unknown column 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID'
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356761#comment-16356761 ] Rui Li commented on HIVE-18647: --- Hi [~jcamachorodriguez], with the latest code, I hit a different issue: {noformat} 2018-02-08T18:13:54,913 ERROR [eeb906f4-bfb8-461f-ada9-fe1b3a8aa22c main] metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDOException: Exception thrown when executing query : SELECT DISTINCT 'org.apache.hadoop.hive.metastore.model.MTable' AS `NUCLEUS_TYPE`,`A0`.`BUCKETING_VERSION`,`A0`.`CREATE_TIME`,`A0`.`LAST_ACCESS_TIME`,`A0`.`LOAD_IN_BUCKETED_TABLE`,`A0`.`OWNER`,`A0`.`RETENTION`,`A0`.`IS_REWRITE_ENABLED`,`A0`.`TBL_NAME`,`A0`.`TBL_TYPE`,`A0`.`TBL_ID` FROM `TBLS` `A0` LEFT OUTER JOIN `DBS` `B0` ON `A0`.`DB_ID` = `B0`.`DB_ID` WHERE `A0`.`TBL_NAME` = ? AND `B0`.`NAME` = ? at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:677) at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:241) at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1579) at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1615) at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1333) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at com.sun.proxy.$Proxy36.getTable(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.is_table_exists(HiveMetaStore.java:1922) at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1462) {noformat} I re-initialized my metastore using the schema tool but the issue persists. > Cannot create table: Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' > --- > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-18647) Cannot create table: Unknown column 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID'
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356761#comment-16356761 ] Rui Li edited comment on HIVE-18647 at 2/8/18 10:18 AM: Hi [~jcamachorodriguez], with the latest code, I hit a different issue: {noformat} 2018-02-08T18:13:54,913 ERROR [eeb906f4-bfb8-461f-ada9-fe1b3a8aa22c main] metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDOException: Exception thrown when executing query : SELECT DISTINCT 'org.apache.hadoop.hive.metastore.model.MTable' AS `NUCLEUS_TYPE`,`A0`.`BUCKETING_VERSION`,`A0`.`CREATE_TIME`,`A0`.`LAST_ACCESS_TIME`,`A0`.`LOAD_IN_BUCKETED_TABLE`,`A0`.`OWNER`,`A0`.`RETENTION`,`A0`.`IS_REWRITE_ENABLED`,`A0`.`TBL_NAME`,`A0`.`TBL_TYPE`,`A0`.`TBL_ID` FROM `TBLS` `A0` LEFT OUTER JOIN `DBS` `B0` ON `A0`.`DB_ID` = `B0`.`DB_ID` WHERE `A0`.`TBL_NAME` = ? AND `B0`.`NAME` = ? at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:677) at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:241) at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1579) at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1615) at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:1333) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at com.sun.proxy.$Proxy36.getTable(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.is_table_exists(HiveMetaStore.java:1922) at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1462) ... NestedThrowablesStackTrace: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'A0.BUCKETING_VERSION' in 'field list' {noformat} I re-initialized my metastore using the schema tool but the issue persists. > Cannot create table: Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' > --- > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch
[jira] [Commented] (HIVE-18350) load data should rename files consistent with insert statements
[ https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356776#comment-16356776 ] Rui Li commented on HIVE-18350: --- Hi [~djaiswal], with this change, I hit [this error|https://issues.apache.org/jira/browse/HIVE-18647?focusedCommentId=16356761&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16356761] when creating a table. Could you please take a look? Thanks. > load data should rename files consistent with insert statements > --- > > Key: HIVE-18350 > URL: https://issues.apache.org/jira/browse/HIVE-18350 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18350.1.patch, HIVE-18350.10.patch, > HIVE-18350.11.patch, HIVE-18350.12.patch, HIVE-18350.13.patch, > HIVE-18350.14.patch, HIVE-18350.15.patch, HIVE-18350.16.patch, > HIVE-18350.2.patch, HIVE-18350.3.patch, HIVE-18350.4.patch, > HIVE-18350.5.patch, HIVE-18350.6.patch, HIVE-18350.7.patch, > HIVE-18350.8.patch, HIVE-18350.9.patch > > > Insert statements create files with names ending in _0, 0001_0, etc. > However, load data uses the input file name. That results in an inconsistent > naming convention, which makes SMB joins difficult in some scenarios and may > cause trouble for other types of queries in the future. > We need a consistent naming convention. > For non-bucketed tables, Hive renames all the files regardless of how they > were named by the user. > For bucketed tables, in non-strict mode Hive relies on the user to name the files matching the > bucket. Hive assumes that the data in a file belongs to the same bucket. > In strict mode, loading a bucketed table is disabled. > This will likely affect most of the tests that load data, which is significant > enough that the work is further divided into two subtasks for a smoother > merge. 
> For existing tables in a customer database, it is recommended to reload > bucketed tables; otherwise, if the customer runs an SMB join and there is a > bucket for which there is no split, there is a possibility of getting > incorrect results. However, this is not a regression, as it would happen even > without the patch. > With this patch, after reloading data, the results should be correct. > For non-bucketed tables and external tables, there is no difference in > behavior and reloading data is not needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
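The renaming behavior described above can be sketched as mapping arbitrary input file names onto insert-style names ending in `_0` (illustrative only; the function name and the exact zero-padded pattern are assumptions, not Hive's actual load-data code):

```python
def rename_for_load(filenames):
    """Map arbitrary input file names onto sequential insert-style names NNNNNN_0."""
    # Sort for a deterministic assignment, then number the files in order.
    return ["%06d_0" % i for i, _ in enumerate(sorted(filenames))]

# Files named by the user become consistently named, as insert statements would produce.
print(rename_for_load(["part-r-00001", "data.txt"]))
```

The point of the patch is that after load data, file names no longer depend on what the user called them, so SMB joins can rely on the bucket-to-file mapping.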
[jira] [Commented] (HIVE-18647) Cannot create table: Unknown column 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID'
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356778#comment-16356778 ] Rui Li commented on HIVE-18647: --- Seems related to HIVE-18350 > Cannot create table: Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' > --- > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
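Both failures above are symptoms of the physical metastore schema lagging behind the JDO model: the SELECT and the INSERT reference columns (`BUCKETING_VERSION`, `CREATION_METADATA_MV_CREATION_METADATA_ID_OID`) that the deployed `TBLS` table does not have. The drift can be reasoned about as a set difference between the columns the model references and the columns the table actually has (illustrative Python; the column lists below are partial and taken from the error messages, not from a live metastore):

```python
def missing_columns(model_columns, table_columns):
    """Return, sorted, the columns the ORM model references but the table lacks."""
    return sorted(set(model_columns) - set(table_columns))

# Columns named in the failing statements above (subset).
model_columns = ["TBL_ID", "TBL_NAME", "TBL_TYPE", "BUCKETING_VERSION",
                 "CREATION_METADATA_MV_CREATION_METADATA_ID_OID"]
# Columns an out-of-date TBLS table would still have (subset, for illustration).
table_columns = ["TBL_ID", "TBL_NAME", "TBL_TYPE"]

print(missing_columns(model_columns, table_columns))
```

Any non-empty result means the schema must be brought forward, e.g. by re-running the schema tool against a freshly created database, as attempted in the comment above.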
[jira] [Commented] (HIVE-18637) WorkloadManagent Event Summary leaving subscribedCounters and currentCounters fields empty
[ https://issues.apache.org/jira/browse/HIVE-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356787#comment-16356787 ] Hive QA commented on HIVE-18637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909607/HIVE-18637.2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 12995 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) 
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9089/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9089/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9089/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 20 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12909607 - PreCommit-HIVE-Build > WorkloadManagent Event Summary leaving subscribedCounters and currentCounters > fields empty > -- > > Key: HIVE-18637 > URL: https://issues.apache.org/jira/browse/HIVE-18637 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18637.1.patch, HIVE-18637.2.patch > > > subscribedCounters and currentCounters values are empty when trigger results > in MOVE event > WorkloadManager Events Summary > {noformat} > INFO : { > "queryId" : "hive_20180205214449_d2955891-e3b2-4ac3-bca9-5d2a53feb8c0", > "queryStartTime" : 1517867089060, > "queryEndTime" : 1517867144341, > "queryCompleted" : true, > "queryWmEvents" : [ { > "wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : "pool2", > "clusterPercent" : 30.0 > }, > "eventStartTimestamp" : 1517867094797, > "eventEndTimestamp" : 1517867094798, > "eventType" : "GET", > "elapsedTime" : 1 > }, { > 
"wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : "pool1", > "clusterPercent" : 70.0 > }, > "eventStartTimestamp" : 1517867139886, > "eventEndTimestamp" : 1517867139887, > "eventType" : "MOVE", > "elapsedTime" : 1 > }, { > "wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : null, > "clusterPercent" : 0.0 > }, > "eventStartTimestamp" : 1517867144360, > "eventEndTimestamp" : 1517867144360, > "eventType" : "RETURN", > "elapsedTime" : 0 > } ], > "appliedTriggers" : [ { > "name" : "too_large_write_triger", > "expression"
[jira] [Assigned] (HIVE-16496) Enhance asterisk expression (as in "select *") with EXCLUDE clause
[ https://issues.apache.org/jira/browse/HIVE-16496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikhil Harsoor reassigned HIVE-16496: - Assignee: Madhudeep Petwal > Enhance asterisk expression (as in "select *") with EXCLUDE clause > -- > > Key: HIVE-16496 > URL: https://issues.apache.org/jira/browse/HIVE-16496 > Project: Hive > Issue Type: Wish > Components: Parser >Reporter: Dudu Markovitz >Assignee: Madhudeep Petwal >Priority: Major > > Support the following syntax: > {code} > select * exclude (a,b,e) from t > {code} > which for a table t with columns a,b,c,d,e would be equivalent to: > {code} > select c,d from t > {code} > Please note that the EXCLUDE clause relates directly to its preceding > asterisk. > Here are some useful use cases: > h3. use-case 1: join > {code} > select t1.* exclude (x), t2.* from t1 join t2 on t1.x=t2.x; > {code} > This supplies a very clean way to select all columns without getting > "Ambiguous column reference" and without the need to specify all the columns > of at least one of the tables. > > Currently, without this enhancement, the query would look something like this: > {code} > select a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,y,z,t2.* from t1 join t2 > on t1.x=t2.x; > {code} > Considering a table may hold hundreds or even thousands of columns, this can > become very ugly and error-prone. > Often this requires some scripting work. > h3. use-case 2: view > Creating views with all the table's columns except for some technical columns > > {code} > create view myview as select * exclude (cre_ts,upd_ts) from t; > {code} > h3. use-case 3: row_number > Remove computational columns that are not needed in the final row-set, e.g. - > retrieve the last record for each customer > {code} > select * exclude (rn) > from (select t.* > ,row_number() over (partition by customer_id order by ts desc) > as rn > from t > ) t > > where rn = 1 > {code}
[jira] [Assigned] (HIVE-16496) Enhance asterisk expression (as in "select *") with EXCLUDE clause
[ https://issues.apache.org/jira/browse/HIVE-16496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikhil Harsoor reassigned HIVE-16496: - Assignee: (was: Nikhil Harsoor)
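The proposed semantics are easy to pin down outside SQL: `* exclude (...)` keeps the table's columns in their declared order, minus the listed names. A minimal sketch (not Hive code; the function name is made up for illustration):

```python
def star_exclude(columns, excluded):
    """Expand '* exclude (...)': keep columns in declared order, dropping the excluded names."""
    excluded = set(excluded)
    return [c for c in columns if c not in excluded]

# For a table t with columns a,b,c,d,e, 'select * exclude (a,b,e) from t'
# should project exactly c and d, in that order.
print(star_exclude(["a", "b", "c", "d", "e"], ["a", "b", "e"]))
```

Note that order preservation matters: the result must match what `select c,d from t` would produce, per the equivalence stated in the issue.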
[jira] [Updated] (HIVE-18541) Secure HS2 web UI with PAM
[ https://issues.apache.org/jira/browse/HIVE-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleksiy Sayankin updated HIVE-18541: Status: In Progress (was: Patch Available) > Secure HS2 web UI with PAM > -- > > Key: HIVE-18541 > URL: https://issues.apache.org/jira/browse/HIVE-18541 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18541.1.patch, HIVE-18541.2.patch > > > Secure HS2 web UI with PAM. Add two new properties > * hive.server2.webui.use.pam > * Default value: false > * Description: If true, the HiveServer2 WebUI will be secured with PAM > * hive.server2.webui.pam.authenticator > * Default value: org.apache.hive.http.security.PamAuthenticator > * Description: Class for PAM authentication -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18541) Secure HS2 web UI with PAM
[ https://issues.apache.org/jira/browse/HIVE-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleksiy Sayankin updated HIVE-18541: Attachment: HIVE-18541.5.patch
[jira] [Updated] (HIVE-18541) Secure HS2 web UI with PAM
[ https://issues.apache.org/jira/browse/HIVE-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleksiy Sayankin updated HIVE-18541: Status: Patch Available (was: In Progress)
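For reference, enabling the feature once this patch lands would amount to setting the two new properties described in HIVE-18541 in `hive-site.xml` (a sketch based on the property names and defaults listed in the issue; the authenticator value shown is the stated default):

```xml
<!-- Secure the HiveServer2 WebUI with PAM (default: false) -->
<property>
  <name>hive.server2.webui.use.pam</name>
  <value>true</value>
</property>
<!-- Class used for PAM authentication (shown: the stated default) -->
<property>
  <name>hive.server2.webui.pam.authenticator</name>
  <value>org.apache.hive.http.security.PamAuthenticator</value>
</property>
```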
[jira] [Commented] (HIVE-18637) WorkloadManagent Event Summary leaving subscribedCounters and currentCounters fields empty
[ https://issues.apache.org/jira/browse/HIVE-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356816#comment-16356816 ] Hive QA commented on HIVE-18637: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
checkstyle {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 6e9b63e | | Default Java | 1.8.0_111 | | modules | C: ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9090/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> WorkloadManagent Event Summary leaving subscribedCounters and currentCounters > fields empty > -- > > Key: HIVE-18637 > URL: https://issues.apache.org/jira/browse/HIVE-18637 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18637.1.patch, HIVE-18637.2.patch > > > subscribedCounters and currentCounters values are empty when trigger results > in MOVE event
[jira] [Commented] (HIVE-18637) WorkloadManagent Event Summary leaving subscribedCounters and currentCounters fields empty
[ https://issues.apache.org/jira/browse/HIVE-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356854#comment-16356854 ] Hive QA commented on HIVE-18637: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909607/HIVE-18637.2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 12995 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[named_column_join] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) 
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9090/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9090/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9090/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 18 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12909607 - PreCommit-HIVE-Build > WorkloadManagent Event Summary leaving subscribedCounters and currentCounters > fields empty > -- > > Key: HIVE-18637 > URL: https://issues.apache.org/jira/browse/HIVE-18637 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18637.1.patch, HIVE-18637.2.patch > > > subscribedCounters and currentCounters values are empty when trigger results > in MOVE event > WorkloadManager Events Summary > {noformat} > INFO : { > "queryId" : "hive_20180205214449_d2955891-e3b2-4ac3-bca9-5d2a53feb8c0", > "queryStartTime" : 1517867089060, > "queryEndTime" : 1517867144341, > "queryCompleted" : true, > "queryWmEvents" : [ { > "wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : "pool2", > "clusterPercent" : 30.0 > }, > "eventStartTimestamp" : 1517867094797, > "eventEndTimestamp" : 1517867094798, > "eventType" : "GET", > "elapsedTime" : 1 > }, { > "wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : "pool1", > "clusterPercent" : 70.0 > }, > "eventStartTimestamp" : 1517867139886, > "eventEndTimestamp" : 
1517867139887, > "eventType" : "MOVE", > "elapsedTime" : 1 > }, { > "wmTezSessionInfo" : { > "sessionId" : "157866e5-ed1c-4abd-9846-db76b91c1124", > "poolName" : null, > "clusterPercent" : 0.0 > }, > "eventStartTimestamp" : 1517867144360, > "eventEndTimestamp" : 1517867144360, > "eventType" : "RETURN", > "elapsedTime" : 0 > } ], > "appliedTriggers" : [ { > "name" : "too_large_write_triger", > "expression" : { > "counterLimit" : { > "limit" : 10240, > "name" : "HDFS_BYTES_WRITTEN" > }, > "predicate" : "GREATER_THAN" > }, > "action" : { > "type"
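For context, a trigger like `too_large_write_triger` in the summary above (counter `HDFS_BYTES_WRITTEN`, limit 10240, predicate `GREATER_THAN`, action MOVE) would be defined with resource-plan DDL along these lines (a sketch: the resource plan name `plan1` is assumed, the target pool `pool1` is taken from the event log, and the exact syntax should be checked against the Hive workload-management documentation):

```sql
-- Hypothetical definition matching the appliedTriggers entry above.
CREATE TRIGGER plan1.too_large_write_triger
  WHEN HDFS_BYTES_WRITTEN > 10240
  DO MOVE TO pool1;
```

When the trigger fires, the session is moved between pools, which is the MOVE event whose summary is reported to be missing subscribedCounters and currentCounters.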
[jira] [Commented] (HIVE-18359) Extend grouping set limits from int to long
[ https://issues.apache.org/jira/browse/HIVE-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356856#comment-16356856 ] Hive QA commented on HIVE-18359: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909703/HIVE-18359.11.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9091/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9091/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9091/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-02-08 12:06:41.406 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-9091/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-02-08 12:06:41.409 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 6e9b63e HIVE-18350 : load data should rename files consistent with insert statements. (Deepak Jaiswal, reviewed by Sergey Shelukhin and Ashutosh Chauhan) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 6e9b63e HIVE-18350 : load data should rename files consistent with insert statements. (Deepak Jaiswal, reviewed by Sergey Shelukhin and Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-02-08 12:06:43.681 + rm -rf ../yetus + mkdir ../yetus + git gc + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9091/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveGroupingID.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveExpandDistinctAggregatesRule.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/HiveGBOpConvUtil.java: does not exist in index 
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/plan/GroupByDesc.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFGrouping.java: does not exist in index error: a/ql/src/test/queries/clientpositive/cte_1.q: does not exist in index error: a/ql/src/test/results/clientpositive/annotate_stats_groupby.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/annotate_stats_groupby2.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/cbo_rp_annotate_stats_groupby.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_cube1.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_cube_multi_gby.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_grouping_id3.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_grouping_sets1.q.out: does not exist in index error: a/ql/src/test
[jira] [Commented] (HIVE-18448) Drop Support For Indexes From Apache Hive
[ https://issues.apache.org/jira/browse/HIVE-18448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356858#comment-16356858 ] Hive QA commented on HIVE-18448: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909435/HIVE-18448.01wip02.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9092/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9092/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9092/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-02-08 12:09:04.411 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-9092/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-02-08 12:09:04.414 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 6e9b63e HIVE-18350 : load data should rename files consistent with insert statements. (Deepak Jaiswal, reviewed by Sergey Shelukhin and Ashutosh Chauhan) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 6e9b63e HIVE-18350 : load data should rename files consistent with insert statements. (Deepak Jaiswal, reviewed by Sergey Shelukhin and Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-02-08 12:09:05.048 + rm -rf ../yetus + mkdir ../yetus + git gc + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9092/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestDDLWithRemoteMetastoreSecondNamenode.java:30 Falling back to three-way merge... Applied patch to 'itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestDDLWithRemoteMetastoreSecondNamenode.java' with conflicts. error: patch failed: itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java:95 Falling back to three-way merge... Applied patch to 'itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java' with conflicts. error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java:52 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java' cleanly. 
error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java:209 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java' cleanly. error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java:65 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java' cleanly. error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java:72 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/TaskCompiler.java' cleanly. error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java:19 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java' with conflicts. error: patch failed: ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java:35 Falling back to three-way merge... Applied patch to 'ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java' with conflicts. Going to apply patch with: git apply -p0 error: patch failed: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestDDLWithRemoteMetastoreSecondName
[jira] [Commented] (HIVE-18646) Update errata.txt for HIVE-18617
[ https://issues.apache.org/jira/browse/HIVE-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356871#comment-16356871 ] Hive QA commented on HIVE-18646: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 6e9b63e | | modules | C: . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9093/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Update errata.txt for HIVE-18617 > > > Key: HIVE-18646 > URL: https://issues.apache.org/jira/browse/HIVE-18646 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Trivial > Attachments: HIVE-18646.1.patch > > > HIVE-18617 was committed as HIVE-18671. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18580) Create tests to cover exchange partitions
[ https://issues.apache.org/jira/browse/HIVE-18580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356915#comment-16356915 ] Peter Vary commented on HIVE-18580: --- +1 > Create tests to cover exchange partitions > - > > Key: HIVE-18580 > URL: https://issues.apache.org/jira/browse/HIVE-18580 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18580.1.patch, HIVE-18580.2.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition exchange_partition(Map<String, String>, String, String, > String, String) > - List<Partition> exchange_partitions(Map<String, String>, String, > String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18646) Update errata.txt for HIVE-18617
[ https://issues.apache.org/jira/browse/HIVE-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356933#comment-16356933 ] Hive QA commented on HIVE-18646: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909615/HIVE-18646.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 12980 tests executed *Failed tests:* {noformat} TestMiniLlapCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=148) [mapreduce2.q,orc_llap_counters1.q,bucket6.q,insert_into1.q,empty_dir_in_table.q,parquet_struct_type_vectorization.q,orc_merge1.q,parquet_types_vectorization.q,orc_merge_diff_fs.q,llap_stats.q,llapdecider.q,llap_nullscan.q,orc_ppd_basic.q,rcfile_merge4.q,orc_merge3.q] org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) 
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.TestAcidTableSetup.testTransactionalValidation (batchId=223) org.apache.hadoop.hive.metastore.client.TestDropPartitions.testDropPartition[Embedded] (batchId=206) org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9093/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9093/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9093/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 23 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12909615 - PreCommit-HIVE-Build > Update errata.txt for HIVE-18617 > > > Key: HIVE-18646 > URL: https://issues.apache.org/jira/browse/HIVE-18646 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Trivial > Attachments: HIVE-18646.1.patch > > > HIVE-18617 was committed as HIVE-18671. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-15353) Metastore throws NPE if StorageDescriptor.cols is null
[ https://issues.apache.org/jira/browse/HIVE-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356947#comment-16356947 ] Peter Vary commented on HIVE-15353: --- Hi [~erwaman], Recently we created several MetaStore API tests. I think creating a table with null cols is tested here: [https://github.com/apache/hive/blob/6e9b63e48b4f34ba26a6eefb354b0c94ee82256c/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestTablesCreateDropAlterTruncate.java#L411] So most probably it is not possible to create a table like this anymore (at least not in 3.0.0 :) ). Thanks, Peter > Metastore throws NPE if StorageDescriptor.cols is null > -- > > Key: HIVE-15353 > URL: https://issues.apache.org/jira/browse/HIVE-15353 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0, 2.2.0 >Reporter: Anthony Hsu >Assignee: Anthony Hsu >Priority: Major > Attachments: HIVE-15353.1.patch, HIVE-15353.2.patch, > HIVE-15353.3.patch, HIVE-15353.4.patch, HIVE-15353.5.patch > > > When using the HiveMetaStoreClient API directly to talk to the metastore, you > get NullPointerExceptions when StorageDescriptor.cols is null in the > Table/Partition object in the following calls: > * create_table > * alter_table > * alter_partition > Calling add_partition with StorageDescriptor.cols set to null causes null to > be stored in the metastore database and subsequent calls to alter_partition > for that partition to fail with an NPE. > Null checks should be added to eliminate the NPEs in the metastore. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
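The defensive null check proposed in HIVE-15353 can be illustrated with a standalone sketch. Note that `StorageDescriptor` below is a hypothetical minimal stand-in for the Thrift-generated metastore class, and `safeCols` is an illustrative helper, not the actual patch:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical minimal stand-in for the Thrift-generated StorageDescriptor.
class StorageDescriptor {
    private List<String> cols;
    List<String> getCols() { return cols; }
    void setCols(List<String> cols) { this.cols = cols; }
}

public class NullColsCheck {
    // The general pattern behind the fix: normalize a null column list to an
    // empty one before it can reach code that iterates it and throws an NPE.
    static List<String> safeCols(StorageDescriptor sd) {
        return sd.getCols() == null ? Collections.emptyList() : sd.getCols();
    }

    public static void main(String[] args) {
        StorageDescriptor sd = new StorageDescriptor();
        // cols was never set: without the guard this would NPE downstream.
        System.out.println(safeCols(sd));
        sd.setCols(List.of("a", "b"));
        System.out.println(safeCols(sd));
    }
}
```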
[jira] [Updated] (HIVE-18511) Fix generated checkstyle errors
[ https://issues.apache.org/jira/browse/HIVE-18511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-18511: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~ychena] for the review! > Fix generated checkstyle errors > --- > > Key: HIVE-18511 > URL: https://issues.apache.org/jira/browse/HIVE-18511 > Project: Hive > Issue Type: Sub-task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18511.2.patch, HIVE-18511.patch > > > HIVE-18510 identified, that checkstyle was not running for test sources. > After running checkstyle several errors are identified -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18655) Apache hive 2.1.1 on Apache Spark 2.0
[ https://issues.apache.org/jira/browse/HIVE-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] AbdulMateen updated HIVE-18655: --- Description: |Hi, when connecting my beeline in hive it is not able to create spark client. {{select count(*) from student; Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67 Total jobs = 1 Launching Job 1 out of 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces= }} Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)' \{{FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1) }} Installed spark prebuilt 2.0 one in standalone cluster mode My hive-site.xml – placed in spark/conf too and removed the hive jars in hdfs path {{ spark.master yarn Spark Master URL spark.eventLog.enabled true Spark Event Log spark.eventLog.dir hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging Spark event log folder spark.executor.memory 512m Spark executor memory spark.serializer org.apache.spark.serializer.KryoSerializer Spark serializer spark.yarn.jars hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/* spark.submit.deployMode cluster Spark Master URL yarn.nodemanager.resource.memory-mb 40960 yarn.scheduler.minimum-allocation-mb 2048 yarn.scheduler.maximum-allocation-mb 8192 }}| was: Hi when connecting my beeline in hive it is not able to create spark client {{select count(*) from student; Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67 Total jobs = 1 Launching Job 1 out of 1 In order to change the average load for a reducer (in bytes): 
set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces=}} {{FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1)}}{{}} |Hi when connecting my beeline in hive it is not able to create spark client {{select count(*) from student; Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67 Total jobs = 1 Launching Job 1 out of 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces= }} Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)' {{FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1) }} Installed spark prebuilt 2.0 one in standalone cluster mode My hive-site.xml -- placed in spark/conf too and removed the hive jars in hdfs path {{ spark.master yarn Spark Master URL spark.eventLog.enabled true Spark Event Log spark.eventLog.dir hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging Spark event log folder spark.executor.memory 512m Spark executor memory spark.serializer org.apache.spark.serializer.KryoSerializer Spark serializer spark.yarn.jars hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/* spark.submit.deployMode cluster Spark Master URL yarn.nodemanager.resource.memory-mb 40960 yarn.scheduler.minimum-allocation-mb 2048 yarn.scheduler.maximum-allocation-mb 8192 }}| > Apache 
hive 2.1.1 on Apache Spark 2.0 > - > > Key: HIVE-18655 > URL: https://issues.apache.org/jira/browse/HIVE-18655 > Project: Hive > Issue Type: Bug > Components: Hive, HiveServer2, Spark >Affects Versions: 2.1.1 > Environment: apache hive -2.1.1 > apache spark - 2.0 - prebuilt version (removed hive jars) > apache hadoop -2.8 >Reporter: AbdulMateen >Priority: Blocker > > > |Hi, > > when connecting my beeline in hive it is not able to create spark client. > > {{select count(
[jira] [Updated] (HIVE-18655) Apache hive 2.1.1 on Apache Spark 2.0
[ https://issues.apache.org/jira/browse/HIVE-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] AbdulMateen updated HIVE-18655: --- Description: |Hi, when connecting my beeline in hive it is not able to create spark client. {{select count(*) from student; Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67 Total jobs = 1 Launching Job 1 out of 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces= }} Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(*Failed to create spark client*.)' { {FAILED: Execution Error, return code 1 from {color:#FF}org.apache.hadoop.hive.ql.exec.spark.SparkTask Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1) }{color} } Installed spark prebuilt 2.0 one in standalone cluster mode My hive-site.xml – placed in spark/conf too and removed the hive jars in hdfs path {{ spark.master yarn Spark Master URL spark.eventLog.enabled true Spark Event Log spark.eventLog.dir hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging Spark event log folder spark.executor.memory 512m Spark executor memory spark.serializer org.apache.spark.serializer.KryoSerializer Spark serializer spark.yarn.jars hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/* spark.submit.deployMode cluster Spark Master URL yarn.nodemanager.resource.memory-mb 40960 yarn.scheduler.minimum-allocation-mb 2048 yarn.scheduler.maximum-allocation-mb 8192 }}| was: |Hi, when connecting my beeline in hive it is not able to create spark client. 
{{select count(*) from student; Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67 Total jobs = 1 Launching Job 1 out of 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces= }} Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)' \{{FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1) }} Installed spark prebuilt 2.0 one in standalone cluster mode My hive-site.xml – placed in spark/conf too and removed the hive jars in hdfs path {{ spark.master yarn Spark Master URL spark.eventLog.enabled true Spark Event Log spark.eventLog.dir hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging Spark event log folder spark.executor.memory 512m Spark executor memory spark.serializer org.apache.spark.serializer.KryoSerializer Spark serializer spark.yarn.jars hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/* spark.submit.deployMode cluster Spark Master URL yarn.nodemanager.resource.memory-mb 40960 yarn.scheduler.minimum-allocation-mb 2048 yarn.scheduler.maximum-allocation-mb 8192 }}| > Apache hive 2.1.1 on Apache Spark 2.0 > - > > Key: HIVE-18655 > URL: https://issues.apache.org/jira/browse/HIVE-18655 > Project: Hive > Issue Type: Bug > Components: Hive, HiveServer2, Spark >Affects Versions: 2.1.1 > Environment: apache hive -2.1.1 > apache spark - 2.0 - prebulit version (removed hive jars) > apache hadoop -2.8 >Reporter: AbdulMateen >Priority: Blocker > > > |Hi, > > when connecting my beeline in hive it is not able to create spark client. 
> > {{select count(*) from student; Query ID = > hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67 Total jobs = 1 > Launching Job 1 out of 1 In order to change the average load for a reducer > (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to > limit the maximum number of reducers: set hive.exec.reducers.max= In > order to set a constant number of reducers: set > mapreduce.job.reduces= }} > Failed to execute spark task, with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(*Failed to create spark > client*.)' > { {FAILED: Execution Error, return code 1 from > {color:#FF}org.apache.hadoop.hive.ql.exec.spark.SparkTask Error: Error > while p
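The configuration in the HIVE-18655 report has been flattened by the mail rendering; restored to hive-site.xml property form (using the names and values exactly as they appear in the report, with the host already redacted by the reporter), it would look roughly like the fragment below. Note the extra colon after the port in the reported `spark.yarn.jars` value, which would itself be a misconfiguration:

```xml
<property>
  <name>spark.master</name>
  <value>yarn</value>
  <description>Spark Master URL</description>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
  <description>Spark Event Log</description>
</property>
<property>
  <name>spark.eventLog.dir</name>
  <value>hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging</value>
  <description>Spark event log folder</description>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>512m</value>
  <description>Spark executor memory</description>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
  <description>Spark serializer</description>
</property>
<property>
  <name>spark.yarn.jars</name>
  <!-- As reported; the ":" after the port looks like a typo in the report. -->
  <value>hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*</value>
</property>
<property>
  <name>spark.submit.deployMode</name>
  <value>cluster</value>
</property>
```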
[jira] [Commented] (HIVE-18567) ObjectStore.getPartitionNamesNoTxn doesn't handle max param properly
[ https://issues.apache.org/jira/browse/HIVE-18567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356970#comment-16356970 ] Adam Szita commented on HIVE-18567: --- Thanks for reviewing and committing [~spena]. > ObjectStore.getPartitionNamesNoTxn doesn't handle max param properly > > > Key: HIVE-18567 > URL: https://issues.apache.org/jira/browse/HIVE-18567 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18567.0.patch, HIVE-18567.1.patch > > > As per [this HMS API test > case|https://github.com/apache/hive/commit/fa0a8d27d4149cc5cc2dbb49d8eb6b03f46bc279#diff-25c67d898000b53e623a6df9221aad5dR1044] > listing partition names doesn't check the max param against > MetaStoreConf.LIMIT_PARTITION_REQUEST (as other methods do by > checkLimitNumberOfPartitionsByFilter), and also behaves differently on max=0 > setting compared to other methods. > We should bring this into consistency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
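One way to express the kind of consistency HIVE-18567 asks for is sketched below. This is an illustrative standalone method, not the actual `ObjectStore` code; the names `effectiveMax` and `serverLimit` are assumptions, and the real patch enforces the limit via `checkLimitNumberOfPartitionsByFilter` rather than silently clamping:

```java
public class PartitionLimit {
    // Illustrative sketch: a request for partition names should honor both
    // the caller's max and the server-side LIMIT_PARTITION_REQUEST setting.
    static int effectiveMax(int requestedMax, int serverLimit) {
        // By convention a negative requestedMax means "no client-side limit",
        // so only the server limit applies.
        if (requestedMax < 0) {
            return serverLimit;
        }
        // Otherwise the smaller of the two bounds wins.
        return Math.min(requestedMax, serverLimit);
    }

    public static void main(String[] args) {
        System.out.println(effectiveMax(10, 1000));   // client bound wins
        System.out.println(effectiveMax(-1, 1000));   // server bound applies
    }
}
```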
[jira] [Updated] (HIVE-18604) DropDatabase cascade fails when there is an index in the DB
[ https://issues.apache.org/jira/browse/HIVE-18604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Szita updated HIVE-18604: -- Attachment: HIVE-18604.0.patch > DropDatabase cascade fails when there is an index in the DB > --- > > Key: HIVE-18604 > URL: https://issues.apache.org/jira/browse/HIVE-18604 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Attachments: HIVE-18604.0.patch > > > As seen in [HMS API > test|https://github.com/apache/hive/blob/master/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestDatabases.java#L452] > dropping database (even with cascade) is failing when an index exists in the > corresponding database, throwing MetaException: > {code:java} > MetaException(message:Exception thrown flushing changes to datastore > ) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:208) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy35.drop_table_with_environment_context(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:2495) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1092) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1007) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:859) > at > org.apache.hadoop.hive.metastore.client.TestDatabases.testDropDatabaseWithIndexCascade(TestDatabases.java:470) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) 
> at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runners.Suite.runChild(Suite.java:127) > at org.junit.runners.Suite.runChild(Suite.java:26) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > Caused by: javax.jdo.JDODataStoreException: Exception thrown flushing changes > to datastore > NestedThrowables: > java.sql.BatchUpdateException: DELETE on table 'TBLS' caused a violation of > foreign key constraint 'IDXS_FK1' for key (2). The statement has been rolled > back. > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171) > at > org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:745)
[jira] [Commented] (HIVE-18604) DropDatabase cascade fails when there is an index in the DB
[ https://issues.apache.org/jira/browse/HIVE-18604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356973#comment-16356973 ] Adam Szita commented on HIVE-18604: --- So it turns out that this issue was only seen in our API tests. That's because when index tables were created for the test cases their "type" fields were not set as {{TableType.INDEX_TABLE}}. This is something Hive already does on HS2 side. I think if we are to use HMS in standalone mode, API should make sure that we cannot create indexes if the user specifies a non index-typed table. I propose we throw an exception instead (see [^HIVE-18604.0.patch]). The rest of the changes in my patch are related test fixes. [~pvary], [~kuczoram] pls let me know what you think. > DropDatabase cascade fails when there is an index in the DB > --- > > Key: HIVE-18604 > URL: https://issues.apache.org/jira/browse/HIVE-18604 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Attachments: HIVE-18604.0.patch > > > As seen in [HMS API > test|https://github.com/apache/hive/blob/master/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestDatabases.java#L452] > dropping database (even with cascade) is failing when an index exists in the > corresponding database, throwing MetaException: > {code:java} > MetaException(message:Exception thrown flushing changes to datastore > ) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:208) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy35.drop_table_with_environment_context(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:2495) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1092) > at > 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1007) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:859) > at > org.apache.hadoop.hive.metastore.client.TestDatabases.testDropDatabaseWithIndexCascade(TestDatabases.java:470) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runners.Suite.runChild(Suite.java:127) > at org.junit.runners.Suite.runChild(Suite.java:26) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) > at com.intellij.rt.execution.juni
[jira] [Updated] (HIVE-18604) DropDatabase cascade fails when there is an index in the DB
[ https://issues.apache.org/jira/browse/HIVE-18604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Szita updated HIVE-18604: -- Status: Patch Available (was: Open) > DropDatabase cascade fails when there is an index in the DB > --- > > Key: HIVE-18604 > URL: https://issues.apache.org/jira/browse/HIVE-18604 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Attachments: HIVE-18604.0.patch > > > As seen in [HMS API > test|https://github.com/apache/hive/blob/master/standalone-metastore/src/test/java/org/apache/hadoop/hive/metastore/client/TestDatabases.java#L452] > dropping database (even with cascade) is failing when an index exists in the > corresponding database, throwing MetaException: > {code:java} > MetaException(message:Exception thrown flushing changes to datastore > ) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:208) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy35.drop_table_with_environment_context(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:2495) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1092) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1007) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropDatabase(HiveMetaStoreClient.java:859) > at > org.apache.hadoop.hive.metastore.client.TestDatabases.testDropDatabaseWithIndexCascade(TestDatabases.java:470) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at 
java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runners.Suite.runChild(Suite.java:127) > at org.junit.runners.Suite.runChild(Suite.java:26) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > Caused by: javax.jdo.JDODataStoreException: Exception thrown flushing changes > to datastore > NestedThrowables: > java.sql.BatchUpdateException: DELETE on table 'TBLS' caused a violation of > foreign key constraint 'IDXS_FK1' for key (2). The statement has been rolled > back. > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171) > at > org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.jav
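The root cause in the stack trace above is a plain foreign-key violation: the DELETE on TBLS fires while a row in IDXS still references it through IDXS_FK1. A minimal sqlite3 sketch (hypothetical table and column names loosely mirroring the metastore schema, not the real HMS DDL) reproduces the same class of failure and shows why cascade must drop the index rows first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
conn.execute("CREATE TABLE tbls (tbl_id INTEGER PRIMARY KEY, tbl_name TEXT)")
conn.execute(
    "CREATE TABLE idxs (idx_id INTEGER PRIMARY KEY, "
    "orig_tbl_id INTEGER REFERENCES tbls(tbl_id))"  # stands in for IDXS_FK1
)
conn.execute("INSERT INTO tbls VALUES (2, 'indexed_table')")
conn.execute("INSERT INTO idxs VALUES (1, 2)")

# Deleting the table row while the index row still points at it fails,
# just like "DELETE on table 'TBLS' caused a violation of ... 'IDXS_FK1'".
try:
    conn.execute("DELETE FROM tbls WHERE tbl_id = 2")
    failed = False
except sqlite3.IntegrityError:
    failed = True

# Deleting the dependent index row first (what a correct cascade drop
# must do) lets the table row go.
conn.execute("DELETE FROM idxs WHERE orig_tbl_id = 2")
conn.execute("DELETE FROM tbls WHERE tbl_id = 2")
print(failed)  # True
```

The fix direction in the patch discussion (rejecting non-index-typed tables at index creation) prevents the state where HMS does not know a table participates in this dependency.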
[jira] [Commented] (HIVE-15995) Syncing metastore table with serde schema
[ https://issues.apache.org/jira/browse/HIVE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356988#comment-16356988 ] Hive QA commented on HIVE-15995: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle 
{color} | {color:red} 0m 47s{color} | {color:red} ql: The patch generated 4 new + 825 unchanged - 0 fixed = 829 total (was 825) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 57s{color} | {color:red} root: The patch generated 4 new + 825 unchanged - 0 fixed = 829 total (was 825) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 6e9b63e | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9094/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9094/yetus/diff-checkstyle-root.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-9094/yetus/whitespace-eol.txt | | modules | C: ql . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9094/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Syncing metastore table with serde schema > - > > Key: HIVE-15995 > URL: https://issues.apache.org/jira/browse/HIVE-15995 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 1.2.1, 2.1.0, 3.0.0 >Reporter: Michal Ferlinski >Assignee: Adam Szita >Priority: Major > Attachments: HIVE-15995.1.patch, HIVE-15995.patch, cx1.avsc, cx2.avsc > > > Hive enables table schema evolution via properties. For Avro, e.g., we can > alter the 'avro.schema.url' property to update the table schema to the next > version. Updating properties however doesn't affect the column list stored in the > metastore DB, so the table is not at the newest version when returned from the > metastore API. This is a problem for tools working with the metastore (e.g. Presto). > To solve this issue I suggest introducing a new DDL statement syncing > metastore columns with those from the serde.
[jira] [Commented] (HIVE-15995) Syncing metastore table with serde schema
[ https://issues.apache.org/jira/browse/HIVE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357013#comment-16357013 ] Hive QA commented on HIVE-15995: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909622/HIVE-15995.1.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 12997 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) 
org.apache.hadoop.hive.metastore.client.TestGetPartitions.testGetPartitionWithAuthInfoNoDbName[Embedded] (batchId=206) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9094/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9094/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9094/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 23 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12909622 - PreCommit-HIVE-Build > Syncing metastore table with serde schema > - > > Key: HIVE-15995 > URL: https://issues.apache.org/jira/browse/HIVE-15995 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 1.2.1, 2.1.0, 3.0.0 >Reporter: Michal Ferlinski >Assignee: Adam Szita >Priority: Major > Attachments: HIVE-15995.1.patch, HIVE-15995.patch, cx1.avsc, cx2.avsc > > > Hive enables table schema evolution via properties. For avro e.g. we can > alter the 'avro.schema.url' property to update table schema to the next > version. 
Updating properties however doesn't affect the column list stored in the > metastore DB, so the table is not at the newest version when returned from the > metastore API. This is a problem for tools working with the metastore (e.g. Presto). > To solve this issue I suggest introducing a new DDL statement syncing > metastore columns with those from the serde: > {code} > ALTER TABLE user_test1 UPDATE COLUMNS > {code} > Note that this is a format-independent solution. > To reproduce, follow the instructions below: > - Create a table based on avro schema version 1 (cx1.avsc) > {code} > CREATE EXTERNAL TABLE user_test1 > PARTITIONED BY (dt string) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' > LOCATION > '/tmp/schema-evolution/user_test1' > TBLPROPERTIES ('avro.schema.url'='/tmp/schema
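The proposed UPDATE COLUMNS statement boils down to replacing the metastore's stored column list with whatever the serde currently reports, so metastore-only clients see newly added fields. A hypothetical Python sketch of that sync step (illustrative names, not Hive's actual classes):

```python
def update_columns(metastore_cols, serde_cols):
    """Replace the metastore column list with the serde's, as the proposed
    ALTER TABLE ... UPDATE COLUMNS would. Returns (new_cols, added, dropped).

    Columns are (name, type) pairs; the serde schema (e.g. the Avro schema
    behind 'avro.schema.url') is treated as the source of truth.
    """
    old_names = {name for name, _ in metastore_cols}
    new_names = {name for name, _ in serde_cols}
    added = sorted(new_names - old_names)
    dropped = sorted(old_names - new_names)
    return list(serde_cols), added, dropped

# The metastore still holds Avro schema version 1; the serde reports
# version 2, which added an (assumed, illustrative) 'email' field.
v1 = [("id", "int"), ("name", "string")]
v2 = [("id", "int"), ("name", "string"), ("email", "string")]
cols, added, dropped = update_columns(v1, v2)
print(added, dropped)  # ['email'] []
```

Because the sync simply mirrors the serde, it is format-independent, matching the description's claim.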
[jira] [Commented] (HIVE-18580) Create tests to cover exchange partitions
[ https://issues.apache.org/jira/browse/HIVE-18580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357042#comment-16357042 ] Hive QA commented on HIVE-18580: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} standalone-metastore: The patch generated 30 new + 0 unchanged - 0 fixed = 30 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace 
issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / b8fdd13 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9095/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9095/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create tests to cover exchange partitions > - > > Key: HIVE-18580 > URL: https://issues.apache.org/jira/browse/HIVE-18580 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18580.1.patch, HIVE-18580.2.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - int Partition exchange_partition(Map, String, String, > String, String) > - List Partition exchange_partition(Map, String, > String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
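The exchange_partition contract being tested moves partitions matching a partition spec out of a source table and into a destination table. A rough Python sketch of that contract (partitions modeled as dicts; the shape is assumed for illustration, not the IMetaStoreClient API):

```python
def exchange_partitions(part_spec, source_parts, dest_parts):
    """Move partitions whose spec matches part_spec from source to dest.

    Mirrors the tested contract: matching partitions disappear from the
    source table and appear in the destination; the moved list is returned.
    """
    matching = [p for p in source_parts
                if all(p["spec"].get(k) == v for k, v in part_spec.items())]
    for p in matching:
        source_parts.remove(p)
        dest_parts.append(p)
    return matching

source = [
    {"spec": {"dt": "2018-02-08"}, "location": "/warehouse/src/dt=2018-02-08"},
    {"spec": {"dt": "2018-02-09"}, "location": "/warehouse/src/dt=2018-02-09"},
]
dest = []
moved = exchange_partitions({"dt": "2018-02-08"}, source, dest)
print(len(moved), len(source), len(dest))  # 1 1 1
```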
[jira] [Commented] (HIVE-17063) insert overwrite partition onto a external table fail when drop partition first
[ https://issues.apache.org/jira/browse/HIVE-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357055#comment-16357055 ] Wang Haihua commented on HIVE-17063: OK, thanks [~djaiswal] for your solution. > insert overwrite partition onto a external table fail when drop partition > first > --- > > Key: HIVE-17063 > URL: https://issues.apache.org/jira/browse/HIVE-17063 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 1.2.2, 2.1.1, 2.2.0 >Reporter: Wang Haihua >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-17063.1.patch, HIVE-17063.2.patch, > HIVE-17063.3.patch, HIVE-17063.4.patch > > > The default value of {{hive.exec.stagingdir}} is a relative path, and dropping a > partition on an external table does not clear the real data. As a > result, running insert overwrite partition twice will fail because the > target data to be moved > already exists. > This happened when we reproduced partition data onto an external table. 
> I see the target data is cleared only when the {{immediately generated data}} is a child of {{the target data directory}}, so my proposal is to clear any target file that already exists when doing the rename of the {{immediately generated data}} into {{the target data directory}}. > Operation reproduced: > {code} > create external table insert_after_drop_partition(key string, val string) > partitioned by (insertdate string); > from src insert overwrite table insert_after_drop_partition partition > (insertdate='2008-01-01') select *; > alter table insert_after_drop_partition drop partition > (insertdate='2008-01-01'); > from src insert overwrite table insert_after_drop_partition partition > (insertdate='2008-01-01') select *; > {code} > Stack trace: > {code} > 2017-07-09T08:32:05,212 ERROR [f3bc51c8-2441-4689-b1c1-d60aef86c3aa main] > exec.Task: Failed with exception java.io.IOException: rename for src path: > pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/.hive-staging_hive_2017-07-09_08-32-03_840_4046825276907030554-1/-ext-1/00_0 > to dest > path:pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/00_0 > returned false > org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: rename > for src path: > pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/.hive-staging_hive_2017-07-09_08-32-03_840_4046825276907030554-1/-ext-1/00_0 > to dest > path:pfile:/data/haihua/official/hive/itests/qtest/target/warehouse/insert_after_drop_partition/insertdate=2008-01-01/00_0 > returned false > at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2992) > at > org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3248) > at > org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1532) > at > 
org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1461) > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:498) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1137) > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:) > at > org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:120) > at > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_after_drop_partition(TestCliDriver.java:103) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.
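The failure mode in the stack trace is a rename that returns false because the destination file already exists, and the proposed fix is to clear any leftover target before renaming the staging file into place. A minimal local-filesystem sketch of that fix pattern (hypothetical helper, not Hive's actual MoveTask code; HDFS-style rename, unlike POSIX rename, refuses to overwrite an existing destination):

```python
import os
import tempfile
from pathlib import Path

def move_into_place(src, dest):
    """Rename src over dest, clearing any stale dest first.

    Models the proposed fix: the data file a dropped partition left
    behind on an external table is removed before the staging file is
    renamed in, so a second 'insert overwrite' cannot fail the rename.
    """
    if os.path.exists(dest):
        os.remove(dest)  # clear data the dropped partition left behind
    os.rename(src, dest)

work = Path(tempfile.mkdtemp())
dest = work / "000000_0"

# First insert overwrite: staging file renamed into the partition dir.
(work / "staging1").write_text("v1")
move_into_place(work / "staging1", dest)

# Partition dropped, but the external table's data file stays on disk.
# The second insert overwrite must survive the leftover 000000_0.
(work / "staging2").write_text("v2")
move_into_place(work / "staging2", dest)
print(dest.read_text())  # v2
```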
[jira] [Updated] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table
[ https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-18553: Attachment: HIVE-18553.6.patch > VectorizedParquetReader fails after adding a new column to table > > > Key: HIVE-18553 > URL: https://issues.apache.org/jira/browse/HIVE-18553 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0, 2.4.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Assignee: Ferdinand Xu >Priority: Major > Attachments: HIVE-18553.2.patch, HIVE-18553.3.patch, > HIVE-18553.4.patch, HIVE-18553.5.patch, HIVE-18553.6.patch, HIVE-18553.patch, > test_result_based_on_HIVE-18553.xlsx > > > VectorizedParquetReader throws an exception when trying to reading from a > parquet table on which new columns are added. Steps to reproduce below: > {code} > 0: jdbc:hive2://localhost:1/default> desc test_p; > +---++--+ > | col_name | data_type | comment | > +---++--+ > | t1| tinyint| | > | t2| tinyint| | > | i1| int| | > | i2| int| | > +---++--+ > 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none; > 0: jdbc:hive2://localhost:1/default> set > hive.vectorized.execution.enabled=true; > 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts > timestamp); > 0: jdbc:hive2://localhost:1/default> select * from test_p; > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2) > {code} > Following exception is seen in the logs > {code} > Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the > store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3 > at > org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479) > 
~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] 
> at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] >
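Conceptually, the reader crashes because the Parquet file's column store predates the newly added column ({{[ts] BINARY is not in the store}}). A schema-evolution-tolerant reader instead projects old rows onto the new reader schema, filling missing columns with nulls. A simplified, hypothetical sketch of that projection (plain dicts, not the actual VectorizedParquetRecordReader logic):

```python
def read_row(stored_row, reader_schema):
    """Project a row stored under an old file schema onto the reader schema.

    Columns the reader expects but the file lacks (like the 'ts' column
    added via ALTER TABLE ... ADD COLUMNS) come back as None instead of
    raising an error for a column that is not in the store.
    """
    return {col: stored_row.get(col) for col in reader_schema}

# Row written before 'ts' was added to test_p.
old_row = {"t1": 1, "t2": 2, "i1": 10, "i2": 20}
new_schema = ["t1", "t2", "i1", "i2", "ts"]
row = read_row(old_row, new_schema)
print(row["ts"])  # None
```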
[jira] [Commented] (HIVE-18580) Create tests to cover exchange partitions
[ https://issues.apache.org/jira/browse/HIVE-18580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357113#comment-16357113 ] Hive QA commented on HIVE-18580: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909627/HIVE-18580.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 13151 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=241) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=180) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=222) 
org.apache.hadoop.hive.metastore.client.TestAddPartitions.testAddPartitionsNullColTypeInSd[Embedded] (batchId=206) org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=283) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=257) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.hcatalog.common.TestHiveClientCache.testCloseAllClients (batchId=200) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=235) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9095/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9095/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9095/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 24 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12909627 - PreCommit-HIVE-Build > Create tests to cover exchange partitions > - > > Key: HIVE-18580 > URL: https://issues.apache.org/jira/browse/HIVE-18580 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18580.1.patch, HIVE-18580.2.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition exchange_partition(Map<String, String>, String, String, String, String) > - List<Partition> exchange_partitions(Map<String, String>, String, String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table
[ https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357165#comment-16357165 ] Hive QA commented on HIVE-18553: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 33 new + 214 unchanged - 84 fixed = 247 total (was 298) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / b8fdd13 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9096/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9096/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > VectorizedParquetReader fails after adding a new column to table > > > Key: HIVE-18553 > URL: https://issues.apache.org/jira/browse/HIVE-18553 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0, 2.4.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Assignee: Ferdinand Xu >Priority: Major > Attachments: HIVE-18553.2.patch, HIVE-18553.3.patch, > HIVE-18553.4.patch, HIVE-18553.5.patch, HIVE-18553.6.patch, HIVE-18553.patch, > test_result_based_on_HIVE-18553.xlsx > > > VectorizedParquetReader throws an exception when trying to reading from a > parquet table on which new columns are added. 
Steps to reproduce below:
> {code}
> 0: jdbc:hive2://localhost:1/default> desc test_p;
> +-----------+------------+----------+
> | col_name  | data_type  | comment  |
> +-----------+------------+----------+
> | t1        | tinyint    |          |
> | t2        | tinyint    |          |
> | i1        | int        |          |
> | i2        | int        |          |
> +-----------+------------+----------+
> 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none;
> 0: jdbc:hive2://localhost:1/default> set hive.vectorized.execution.enabled=true;
> 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts timestamp);
> 0: jdbc:hive2://localhost:1/default> select * from test_p;
> Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
> {code}
> Following exception is seen in the logs
> {code}
> Caused by: java.lang.Il
[jira] [Updated] (HIVE-18448) Drop Support For Indexes From Apache Hive
[ https://issues.apache.org/jira/browse/HIVE-18448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-18448: Attachment: HIVE-18448.01wip03.patch > Drop Support For Indexes From Apache Hive > - > > Key: HIVE-18448 > URL: https://issues.apache.org/jira/browse/HIVE-18448 > Project: Hive > Issue Type: Improvement > Components: Indexing >Reporter: BELUGA BEHR >Assignee: Zoltan Haindrich >Priority: Minor > Attachments: HIVE-18448.01wip02.patch, HIVE-18448.01wip03.patch > > > If a user needs to look up a small subset of records quickly, they can use > Apache HBase, if they need fast retrieval of larger sets of data, or fast > joins, aggregations, they can use Apache Impala. It seems to me that Hive > indexes do not serve much of a role in the future of Hive. > Even without moving workloads to other products, columnar file formats with > their statistics achieve similar goals as Hive indexes. > Please consider dropping Indexes from the Apache Hive project. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18238) Driver execution may not have configuration changing sideeffects
[ https://issues.apache.org/jira/browse/HIVE-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-18238: Attachment: HIVE-18238.10.patch > Driver execution may not have configuration changing sideeffects > - > > Key: HIVE-18238 > URL: https://issues.apache.org/jira/browse/HIVE-18238 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-18238.01wip01.patch, HIVE-18238.02.patch, > HIVE-18238.03.patch, HIVE-18238.04.patch, HIVE-18238.04wip01.patch, > HIVE-18238.07.patch, HIVE-18238.08.patch, HIVE-18238.09.patch, > HIVE-18238.10.patch > > > {{Driver}} executes sql statements which use "hiveconf" settings; > but the {{Driver}} itself may *not* change the configuration... > I've found an example; which shows how hazardous this is... > {code} > set hive.mapred.mode=strict; > select "${hiveconf:hive.mapred.mode}"; > create table t (a int); > analyze table t compute statistics; > select "${hiveconf:hive.mapred.mode}"; > {code} > currently; the last select returns {{nonstrict}} because of > [this|https://github.com/apache/hive/blob/7ddd915bf82a68c8ab73b0c4ca409f1a6d43d227/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L1696] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
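As an illustration of the defensive pattern the fix aims for (a rough sketch, not Hive's actual Driver code), mutating a per-query copy of the configuration keeps the session-level setting intact:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfIsolationSketch {
    public static void main(String[] args) {
        // Session-level configuration, as set by "set hive.mapred.mode=strict;"
        Map<String, String> sessionConf = new HashMap<>();
        sessionConf.put("hive.mapred.mode", "strict");

        // Per-query copy: anything the analyzer tweaks stays local to the query.
        Map<String, String> queryConf = new HashMap<>(sessionConf);
        queryConf.put("hive.mapred.mode", "nonstrict"); // e.g. relaxed for ANALYZE

        // The session still sees the user's setting afterwards.
        System.out.println(sessionConf.get("hive.mapred.mode")); // strict
    }
}
```

Without the copy, the analyzer's write leaks into the session configuration, which is exactly what the quoted script demonstrates.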
[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table
[ https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357248#comment-16357248 ] Hive QA commented on HIVE-18553: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909816/HIVE-18553.6.patch {color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 118 failed/errored test(s), 12954 tests executed *Failed tests:* {noformat} TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=107) [join_cond_pushdown_unqual4.q,union_remove_7.q,join13.q,join_vc.q,groupby_cube1.q,parquet_vectorization_2.q,bucket_map_join_spark2.q,sample3.q,smb_mapjoin_19.q,union23.q,union.q,union31.q,cbo_udf_udaf.q,ptf_decimal.q,bucketmapjoin2.q] TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=116) [skewjoinopt3.q,skewjoinopt19.q,timestamp_comparison.q,join_merge_multi_expressions.q,union5.q,insert_into1.q,vectorization_4.q,parquet_vectorization_10.q,vector_left_outer_join.q,decimal_1_1.q,semijoin.q,skewjoinopt9.q,smb_mapjoin_3.q,stats10.q,rcfile_bigdata.q] TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=144) [groupby2_noskew_multi_distinct.q,load_dyn_part12.q,scriptfile1.q,join15.q,auto_join17.q,subquery_multiinsert.q,join_hive_626.q,tez_join_tests.q,parquet_vectorization_16.q,auto_join21.q,join_view.q,join_cond_pushdown_4.q,vectorization_0.q,union_null.q,auto_join3.q] org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_complex_types_vectorization] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_map_type_vectorization] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_types_vectorization] (batchId=14) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_10] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_11] (batchId=38) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_12] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_13] (batchId=52) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_14] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_15] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_16] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_17] (batchId=29) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_2] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_5] (batchId=71) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_6] (batchId=42) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_7] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_8] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_9] (batchId=30) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_not] (batchId=79) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_part] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_part_project] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_part_varchar] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[schema_evol_par_vec_table_non_dictionary_encoding] (batchId=50) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_parquet_types] (batchId=67) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_complex_types_vectorization] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_map_type_vectorization] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_types_vectorization] (batchId=148) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocal
[jira] [Assigned] (HIVE-18034) Improving logging with HoS executors spend lots of time in GC
[ https://issues.apache.org/jira/browse/HIVE-18034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-18034: --- Assignee: Sahil Takiar > Improving logging with HoS executors spend lots of time in GC > - > > Key: HIVE-18034 > URL: https://issues.apache.org/jira/browse/HIVE-18034 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > > There are times when Spark will spend lots of time doing GC. The Spark > History UI shows a bunch of red flags when too much time is spent in GC. It > would be nice if those warnings are propagated to Hive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18655) Apache hive 2.1.1 on Apache Spark 2.0
[ https://issues.apache.org/jira/browse/HIVE-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357282#comment-16357282 ] Sahil Takiar commented on HIVE-18655: - Hive 2.1.1 was only ever tested with Spark 1.6.0, so it's unlikely to work with Spark 2.0. If you want to use Spark 2.0.0, try Hive 2.3.x, which has been tested with Spark 2.0.0.
> Apache hive 2.1.1 on Apache Spark 2.0
> -
>
> Key: HIVE-18655
> URL: https://issues.apache.org/jira/browse/HIVE-18655
> Project: Hive
> Issue Type: Bug
> Components: Hive, HiveServer2, Spark
>Affects Versions: 2.1.1
> Environment: apache hive - 2.1.1
> apache spark - 2.0 - prebuilt version (removed hive jars)
> apache hadoop - 2.8
>Reporter: AbdulMateen
>Priority: Blocker
>
> Hi,
>
> When connecting via beeline, Hive is not able to create a Spark client:
> {{select count(*) from student; Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67 Total jobs = 1 Launching Job 1 out of 1 In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer= In order to limit the maximum number of reducers: set hive.exec.reducers.max= In order to set a constant number of reducers: set mapreduce.job.reduces= }}
> Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(*Failed to create spark client*.)'
> {{FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1)}}
>
> Installed Spark prebuilt 2.0 in standalone cluster mode.
>
> My hive-site.xml (placed in spark/conf too; removed the hive jars in the hdfs path):
> {{ spark.master yarn (Spark Master URL)
> spark.eventLog.enabled true (Spark Event Log)
> spark.eventLog.dir hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging
> (Spark event log folder)
> spark.executor.memory 512m (Spark executor memory)
> spark.serializer org.apache.spark.serializer.KryoSerializer (Spark serializer)
> spark.yarn.jars hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*
> spark.submit.deployMode cluster (Spark Master URL)
>
> My yarn-site.xml:
> {{ yarn.nodemanager.resource.memory-mb 40960
> yarn.scheduler.minimum-allocation-mb 2048
> yarn.scheduler.maximum-allocation-mb 8192 }}
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
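The hive-site.xml quoted above lost its XML tags in the mail rendering. Assuming the standard Hadoop `<property>` syntax, the visible settings would look roughly like this (values copied from the comment as pasted; note the stray second colon after the port in spark.yarn.jars, which would itself break the jar path):

```xml
<!-- Hypothetical reconstruction of the quoted hive-site.xml fragment -->
<property>
  <name>spark.master</name>
  <value>yarn</value>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.eventLog.dir</name>
  <value>hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging</value>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>512m</value>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
</property>
<property>
  <name>spark.yarn.jars</name>
  <!-- As pasted; "9000:" carries an extra colon after the port -->
  <value>hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*</value>
</property>
<property>
  <name>spark.submit.deployMode</name>
  <value>cluster</value>
</property>
```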
[jira] [Commented] (HIVE-18541) Secure HS2 web UI with PAM
[ https://issues.apache.org/jira/browse/HIVE-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357287#comment-16357287 ] Hive QA commented on HIVE-18541: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} service in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 19s{color} | {color:red} service in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 19s{color} | {color:red} service in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} common: The patch generated 22 new + 439 unchanged - 0 fixed = 461 total (was 439) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} service: The patch generated 9 new + 20 unchanged - 0 fixed = 29 total (was 20) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / b8fdd13 | | Default Java | 1.8.0_111 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-9097/yetus/patch-mvninstall-service.txt | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-9097/yetus/patch-compile-service.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-9097/yetus/patch-compile-service.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9097/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9097/yetus/diff-checkstyle-service.txt | | modules | C: common service U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9097/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Secure HS2 web UI with PAM > -- > > Key: HIVE-18541 > URL: https://issues.apache.org/jira/browse/HIVE-18541 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18541.1.patch, HIVE-18541.2.patch, > HIVE-18541.5.patch > > > Secure HS2 web UI with PAM. Add two new properties > * hive.server2.webui.use.pam > * Default value: false > * Description: If true, the HiveServer2 WebUI will be
[jira] [Comment Edited] (HIVE-18421) Vectorized execution handles overflows in a different manner than non-vectorized execution
[ https://issues.apache.org/jira/browse/HIVE-18421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356455#comment-16356455 ] Aihua Xu edited comment on HIVE-18421 at 2/8/18 5:50 PM: - [~vihangk1] Sorry for the late reply. I left comment in RB. Basically I don't follow why we need both CHECKED and UNCHECKED implementations. Seems we should only have CHECKED one if UNCHECKED one would generate incorrect result. The user would get incorrect result without notice, right? Of course, even we want to support UNCHECKED implementation, we should still error out/fail the query if there is overflow so the user knows to set the flag to true. BTW: how much performance impact for this and why (don't exactly follow previous discussion)? was (Author: aihuaxu): [~vihangk1] Sorry for the late reply. I left comment in RB. Basically I don't follow why we need both CHECKED and UNCHECKED implementations. Seems we should only have CHECKED one if UNCHECKED one would generate incorrect result. The user would get incorrect result without notice, right? Of course, even we want to support UNCHECKED implementation, we should error out/fail the query if there is overflow so the user knows to set the flag to true. BTW: how much performance impact for this and why (don't exactly follow previous discussion)? > Vectorized execution handles overflows in a different manner than > non-vectorized execution > -- > > Key: HIVE-18421 > URL: https://issues.apache.org/jira/browse/HIVE-18421 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 2.1.1, 2.2.0, 3.0.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18421.01.patch, HIVE-18421.02.patch, > HIVE-18421.03.patch, HIVE-18421.04.patch, HIVE-18421.05.patch, > HIVE-18421.06.patch, HIVE-18421.07.patch > > > In vectorized execution arithmetic operations which cause integer overflows > can give wrong results. 
Issue is reproducible in both ORC and Parquet.
> Simple test case to reproduce this issue:
> {noformat}
> set hive.vectorized.execution.enabled=true;
> create table parquettable (t1 tinyint, t2 tinyint) stored as parquet;
> insert into parquettable values (-104, 25), (-112, 24), (54, 9);
> select t1, t2, (t1-t2) as diff from parquettable where (t1-t2) < 50 order by diff desc;
> +-------+-----+-------+
> | t1    | t2  | diff  |
> +-------+-----+-------+
> | -104  | 25  | 127   |
> | -112  | 24  | 120   |
> | 54    | 9   | 45    |
> +-------+-----+-------+
> {noformat}
> When vectorization is turned off the same query produces only one row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
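The wrapped values in the table fall straight out of two's-complement arithmetic on 8-bit tinyint. A standalone Java check (byte plays the role of tinyint here):

```java
public class TinyintOverflowDemo {
    public static void main(String[] args) {
        byte t1 = -104, t2 = 25;
        // Java widens byte operands to int, so t1 - t2 is the true value -129 ...
        int widened = t1 - t2;
        // ... but truncating back to 8 bits wraps it to 127, matching the bad output.
        byte wrapped = (byte) (t1 - t2);
        System.out.println(widened + " wraps to " + wrapped); // -129 wraps to 127
    }
}
```

-112 - 24 = -136 wraps to 120 the same way, while 54 - 9 = 45 fits in a byte and is unaffected.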
[jira] [Commented] (HIVE-17835) HS2 Logs print unnecessary stack trace when HoS query is cancelled
[ https://issues.apache.org/jira/browse/HIVE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357325#comment-16357325 ] Chao Sun commented on HIVE-17835: - Patch LGTM. +1 > HS2 Logs print unnecessary stack trace when HoS query is cancelled > -- > > Key: HIVE-17835 > URL: https://issues.apache.org/jira/browse/HIVE-17835 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-17835.1.patch, HIVE-17835.2.patch, > HIVE-17835.3.patch > > > Example: > {code} > 2017-10-05 17:47:11,881 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89) > at > org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301) > at 
java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at > org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2017-10-05 17:47:11,881 WARN org.apache.hadoop.hive.ql.Driver: > [HiveServer2-Handler-Pool: Thread-105]: Shutting down task : Stage-2:MAPRED > 2017-10-05 17:47:11,882 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236) > 
at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89) > at > org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) > at > org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314) > at > java.util.concurrent.Executors$RunnableAda
[jira] [Commented] (HIVE-15353) Metastore throws NPE if StorageDescriptor.cols is null
[ https://issues.apache.org/jira/browse/HIVE-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357352#comment-16357352 ] Anthony Hsu commented on HIVE-15353: [~pvary], thanks for the update! In that case, I think this ticket is no longer needed, as the test you added in HIVE-18509 proves this is no longer a problem. Will resolve this ticket. > Metastore throws NPE if StorageDescriptor.cols is null > -- > > Key: HIVE-15353 > URL: https://issues.apache.org/jira/browse/HIVE-15353 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0, 2.2.0 >Reporter: Anthony Hsu >Assignee: Anthony Hsu >Priority: Major > Attachments: HIVE-15353.1.patch, HIVE-15353.2.patch, > HIVE-15353.3.patch, HIVE-15353.4.patch, HIVE-15353.5.patch > > > When using the HiveMetaStoreClient API directly to talk to the metastore, you > get NullPointerExceptions when StorageDescriptor.cols is null in the > Table/Partition object in the following calls: > * create_table > * alter_table > * alter_partition > Calling add_partition with StorageDescriptor.cols set to null causes null to > be stored in the metastore database and subsequent calls to alter_partition > for that partition to fail with an NPE. > Null checks should be added to eliminate the NPEs in the metastore. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-15353) Metastore throws NPE if StorageDescriptor.cols is null
[ https://issues.apache.org/jira/browse/HIVE-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anthony Hsu updated HIVE-15353: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Metastore throws NPE if StorageDescriptor.cols is null > -- > > Key: HIVE-15353 > URL: https://issues.apache.org/jira/browse/HIVE-15353 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0, 2.2.0 >Reporter: Anthony Hsu >Assignee: Anthony Hsu >Priority: Major > Attachments: HIVE-15353.1.patch, HIVE-15353.2.patch, > HIVE-15353.3.patch, HIVE-15353.4.patch, HIVE-15353.5.patch > > > When using the HiveMetaStoreClient API directly to talk to the metastore, you > get NullPointerExceptions when StorageDescriptor.cols is null in the > Table/Partition object in the following calls: > * create_table > * alter_table > * alter_partition > Calling add_partition with StorageDescriptor.cols set to null causes null to > be stored in the metastore database and subsequent calls to alter_partition > for that partition to fail with an NPE. > Null checks should be added to eliminate the NPEs in the metastore. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
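The null checks the description asks for amount to normalizing a missing column list before anything iterates over it. A rough sketch with a stand-in class (not the real Thrift-generated StorageDescriptor):

```java
import java.util.ArrayList;
import java.util.List;

public class SdColsGuardSketch {
    // Hypothetical stand-in for the Thrift StorageDescriptor.
    static class StorageDescriptor {
        private List<String> cols; // a raw client may leave this null
        List<String> getCols() { return cols; }
        void setCols(List<String> c) { cols = c; }
    }

    // Replace a null column list with an empty one before it is used or persisted.
    static void normalizeCols(StorageDescriptor sd) {
        if (sd.getCols() == null) {
            sd.setCols(new ArrayList<>());
        }
    }

    public static void main(String[] args) {
        StorageDescriptor sd = new StorageDescriptor();
        normalizeCols(sd);
        System.out.println(sd.getCols().size()); // 0, instead of an NPE
    }
}
```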
[jira] [Commented] (HIVE-18547) WM: trigger test may fail
[ https://issues.apache.org/jira/browse/HIVE-18547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357360#comment-16357360 ] Sergey Shelukhin commented on HIVE-18547: - Hmm.. wouldn't it make previously recorded events incorrect if cluster fraction actually changes due to move or RP reconfiguration? The race is visible in the log snippet... > WM: trigger test may fail > - > > Key: HIVE-18547 > URL: https://issues.apache.org/jira/browse/HIVE-18547 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Sergey Shelukhin >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18547.1.patch > > > https://builds.apache.org/job/PreCommit-HIVE-Build/8818/testReport/org.apache.hive.jdbc/TestTriggersMoveWorkloadManager/testTriggerMoveAndKill/ > Looks like the cluster allocation assignment and WM event creation race, > probably because WM returns session to the caller ASAP and then makes the > changes after that. > {noformat} > 'Event: GET Pool: BI Cluster %: 80.00' expected in STDERR capture, but not > found. > ... 
> 2018-01-24T15:07:31,746 INFO [Workload management master] > tez.WorkloadManager: Processing changes for pool BI: [BI, query parallelism > 1, fraction of the cluster 0.80011920929, fraction used by child pools > 0.0, active sessions 0, initializing sessions 0] > 2018-01-24T15:07:31,746 INFO [Workload management master] > tez.WorkloadManager: Starting 1 queries in pool [BI, query parallelism 1, > fraction of the cluster 0.80011920929, fraction used by child pools 0.0, > active sessions 0, initializing sessions 0] > 2018-01-24T15:07:31,746 INFO [Workload management master] > tez.WorkloadManager: Received a session from AM pool > sessionId=2be29c62-9f2c-40b7-a5eb-6298baf83a34, queueName=default, > user=hiveptest, doAs=false, isOpen=true, isDefault=true, expires in > 588529859ms, WM state poolName=null, clusterFraction=0.0, queryId=null, > killReason=null > 2018-01-24T15:07:31,746 INFO [HiveServer2-Background-Pool: Thread-1377] > tez.WmEvent: Added WMEvent: EventType: GET EventStartTimestamp: 1516835251746 > elapsedTime: 0 wmTezSessionInfo:SessionId: > 2be29c62-9f2c-40b7-a5eb-6298baf83a34 Pool: BI Cluster %: 0.0 > 2018-01-24T15:07:31,746 INFO [Workload management master] > tez.GuaranteedTasksAllocator: Updating 2be29c62-9f2c-40b7-a5eb-6298baf83a34 > with 3 guaranteed tasks > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
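The ordering problem visible in the log above can be reduced to a small sketch. This is plain Java with illustrative names, not the actual WorkloadManager code: if the GET event snapshots the session's cluster fraction before the WM master thread has applied the allocation, it records the stale 0.0 value.

```java
// Minimal sketch of the ordering race described above; not WM code.
public class WmEventRaceSketch {
    static volatile double clusterFraction = 0.0;

    // The WM event records whatever the fraction happens to be right now.
    static String recordGetEvent() {
        return "GET Cluster %: " + clusterFraction;
    }

    public static void main(String[] args) {
        // Caller thread logs the event as soon as it receives the session...
        String tooEarly = recordGetEvent();
        // ...while the WM master applies the 0.8 allocation only afterwards.
        clusterFraction = 0.8;
        String afterUpdate = recordGetEvent();
        System.out.println(tooEarly);    // GET Cluster %: 0.0
        System.out.println(afterUpdate); // GET Cluster %: 0.8
    }
}
```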
[jira] [Commented] (HIVE-18541) Secure HS2 web UI with PAM
[ https://issues.apache.org/jira/browse/HIVE-18541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357382#comment-16357382 ] Hive QA commented on HIVE-18541: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909777/HIVE-18541.5.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12999 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestAddAlterDropIndexes.testDropIndexInvalidDB[Embedded] (batchId=206) org.apache.hadoop.hive.metastore.client.TestDatabases.testGetAllDatabases[Embedded] (batchId=213) 
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9097/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9097/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9097/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 22 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12909777 - PreCommit-HIVE-Build > Secure HS2 web UI with PAM > -- > > Key: HIVE-18541 > URL: https://issues.apache.org/jira/browse/HIVE-18541 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18541.1.patch, HIVE-18541.2.patch, > HIVE-18541.5.patch > > > Secure HS2 web UI with PAM. 
Add two new properties > * hive.server2.webui.use.pam > * Default value: false > * Description: If true, the HiveServer2 WebUI will be secured with PAM > * hive.server2.webui.pam.authenticator > * Default value: org.apache.hive.http.security.PamAuthenticator > * Description: Class for PAM authentication -- This message was sent by Atlassian JIRA (v7.6.3#76005)
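A hive-site.xml fragment enabling the two properties above might look like the following sketch; the authenticator value shown is just the documented default class:

```xml
<!-- Sketch of a hive-site.xml fragment for the new PAM properties. -->
<property>
  <name>hive.server2.webui.use.pam</name>
  <value>true</value>
  <description>If true, the HiveServer2 WebUI will be secured with PAM</description>
</property>
<property>
  <name>hive.server2.webui.pam.authenticator</name>
  <value>org.apache.hive.http.security.PamAuthenticator</value>
  <description>Class for PAM authentication</description>
</property>
```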
[jira] [Commented] (HIVE-18359) Extend grouping set limits from int to long
[ https://issues.apache.org/jira/browse/HIVE-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357384#comment-16357384 ] Hive QA commented on HIVE-18359: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909703/HIVE-18359.11.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9098/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9098/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9098/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-02-08 18:41:33.061 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-9098/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-02-08 18:41:33.067 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at b8fdd13 HIVE-18511: Fix generated checkstyle errors (Peter Vary, reviewed by Yongzhi Chen) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at b8fdd13 HIVE-18511: Fix generated checkstyle errors (Peter Vary, reviewed by Yongzhi Chen) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-02-08 18:41:35.773 + rm -rf ../yetus + mkdir ../yetus + git gc + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-9098/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorGroupByOperator.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/VirtualColumn.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveGroupingID.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveExpandDistinctAggregatesRule.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/HiveGBOpConvUtil.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java: does not exist in index error: 
a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/plan/GroupByDesc.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFGrouping.java: does not exist in index error: a/ql/src/test/queries/clientpositive/cte_1.q: does not exist in index error: a/ql/src/test/results/clientpositive/annotate_stats_groupby.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/annotate_stats_groupby2.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/cbo_rp_annotate_stats_groupby.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_cube1.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_cube_multi_gby.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_grouping_id3.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_grouping_sets1.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/groupby_grouping_sets2.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/group
[jira] [Commented] (HIVE-18421) Vectorized execution handles overflows in a different manner than non-vectorized execution
[ https://issues.apache.org/jira/browse/HIVE-18421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357392#comment-16357392 ] Aihua Xu commented on HIVE-18421: - Talked to [~vihangk1] offline and got a better understanding. The non-vectorized version is actually also not handling overflow properly, and that is a larger problem to address overall. [~vihangk1] Do you think we should enable this property to true by default or just have the CHECKED version? > Vectorized execution handles overflows in a different manner than > non-vectorized execution > -- > > Key: HIVE-18421 > URL: https://issues.apache.org/jira/browse/HIVE-18421 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 2.1.1, 2.2.0, 3.0.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18421.01.patch, HIVE-18421.02.patch, > HIVE-18421.03.patch, HIVE-18421.04.patch, HIVE-18421.05.patch, > HIVE-18421.06.patch, HIVE-18421.07.patch > > > In vectorized execution, arithmetic operations which cause integer overflows > can give wrong results. The issue is reproducible in both ORC and Parquet. > Simple test case to reproduce this issue: > {noformat} > set hive.vectorized.execution.enabled=true; > create table parquettable (t1 tinyint, t2 tinyint) stored as parquet; > insert into parquettable values (-104, 25), (-112, 24), (54, 9); > select t1, t2, (t1-t2) as diff from parquettable where (t1-t2) < 50 order by > diff desc; > +-------+-----+-------+ > | t1    | t2  | diff  | > +-------+-----+-------+ > | -104  | 25  | 127   | > | -112  | 24  | 120   | > | 54    | 9   | 45    | > +-------+-----+-------+ > {noformat} > When vectorization is turned off the same query produces only one row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
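The wrong rows in the example above are consistent with plain 8-bit wraparound. This standalone sketch (ordinary Java, not Hive code) reproduces the same diff values the vectorized query returns:

```java
public class TinyintOverflowDemo {
    public static void main(String[] args) {
        byte[][] rows = { {-104, 25}, {-112, 24}, {54, 9} };
        for (byte[] r : rows) {
            // The narrowing cast wraps on overflow the same way the
            // vectorized tinyint subtraction does in the bug report:
            // -104 - 25 = -129, which wraps to 127 in 8 bits.
            byte diff = (byte) (r[0] - r[1]);
            System.out.println(r[0] + " - " + r[1] + " = " + diff);
        }
    }
}
```

Only 54 - 9 = 45 is under the query's `(t1-t2) < 50` filter without wraparound, which is why the non-vectorized path returns a single row.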
[jira] [Commented] (HIVE-18647) Cannot create table: Unknown column 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID'
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357400#comment-16357400 ] Vineet Garg commented on HIVE-18647: I am also running into same issue with Derby > Cannot create table: Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' > --- > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (HIVE-18647) Cannot create table: Unknown column 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID'
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reopened HIVE-18647: > Cannot create table: Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' > --- > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at 
com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18647) Cannot create table: "message:Exception thrown when executing query : SELECT DISTINCT.."
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18647: --- Summary: Cannot create table: "message:Exception thrown when executing query : SELECT DISTINCT.." (was: Cannot create table: Unknown column 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID') > Cannot create table: "message:Exception thrown when executing query : SELECT > DISTINCT.." > > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18647) Cannot create table: "message:Exception thrown when executing query : SELECT DISTINCT.."
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357406#comment-16357406 ] Vineet Garg commented on HIVE-18647: Modified the Jira title to reflect new error/exception message. I get the following exception: {code:java} FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Exception thrown when executing query : SELECT DISTINCT 'org.apache.hadoop.hive.metastore.model.MTable' AS NUCLEUS_TYPE,A0.BUCKETING_VERSION,A0.CREATE_TIME,A0.LAST_ACCESS_TIME,A0.LOAD_IN_BUCKETED_TABLE,A0.OWNER,A0.RETENTION,A0.IS_REWRITE_ENABLED,A0.TBL_NAME,A0.TBL_TYPE,A0.TBL_ID FROM TBLS A0 LEFT OUTER JOIN DBS B0 ON A0.DB_ID = B0.DB_ID WHERE A0.TBL_NAME = ? AND B0."NAME" = ?){code} > Cannot create table: "message:Exception thrown when executing query : SELECT > DISTINCT.." > > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. 
> Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by 
Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18586) Upgrade Derby to 10.14.1.0
[ https://issues.apache.org/jira/browse/HIVE-18586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357457#comment-16357457 ] Hive QA commented on HIVE-18586: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 26s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
checkstyle {color} | {color:green} 0m 16s{color} | {color:green} standalone-metastore: The patch generated 0 new + 21 unchanged - 2 fixed = 21 total (was 23) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch core passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} The patch java-client passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 50s{color} | {color:green} root: The patch generated 0 new + 146 unchanged - 2 fixed = 146 total (was 148) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / b8fdd13 | | Default Java | 1.8.0_111 | | modules | C: standalone-metastore hcatalog/core hcatalog/webhcat/java-client . U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9099/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Upgrade Derby to 10.14.1.0 > -- > > Key: HIVE-18586 > URL: https://issues.apache.org/jira/browse/HIVE-18586 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-18586.1.patch, HIVE-18586.2.patch, > HIVE-18586.3.patch, HIVE-18586.4.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18389) Print out Spark Web UI URL to the console log
[ https://issues.apache.org/jira/browse/HIVE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18389: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Peter for the review. > Print out Spark Web UI URL to the console log > - > > Key: HIVE-18389 > URL: https://issues.apache.org/jira/browse/HIVE-18389 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18389.1.patch > > > Should be accessible via {{SparkContext#uiWebUrl}}. It just needs to be sent > from the {{RemoteDriver}} to HS2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18656) Trigger with counter TOTAL_TASKS fails to result in an event even when condition is met
[ https://issues.apache.org/jira/browse/HIVE-18656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar reassigned HIVE-18656: --- > Trigger with counter TOTAL_TASKS fails to result in an event even when condition is met > --- > > Key: HIVE-18656 > URL: https://issues.apache.org/jira/browse/HIVE-18656 > Project: Hive > Issue Type: Bug > Components: HiveServer2 > Affects Versions: 3.0.0 > Environment: Trigger involving counter TOTAL_TASKS seems to fail to trigger event in definition even when the trigger condition is met > Trigger definition:
> {noformat}
> +--------------------------------------------------------------------------+
> | line                                                                     |
> +--------------------------------------------------------------------------+
> | plan_1[status=ACTIVE,parallelism=null,defaultPool=default]               |
> |  +  default[allocFraction=1.0,schedulingPolicy=null,parallelism=4]       |
> |  |  mapped for default                                                   |
> |  +                                                                       |
> |  |  trigger limit_task_per_vertex_trigger: if (TOTAL_TASKS > 5) { KILL } |
> +--------------------------------------------------------------------------+
> {noformat}
> Query is finishing fine even when one vertex is having 29 tasks
> {noformat}
> INFO : Query ID = hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
> INFO : Total jobs = 1
> INFO : Launching Job 1 out of 1
> INFO : Starting task [Stage-1:MAPRED] in serial mode
> INFO : Subscribed to counters: [TOTAL_TASKS] for queryId: hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
> INFO : Tez session hasn't been created yet. Opening session
> INFO : Dag name: with ssales as (select c_last_name...ssales) (Stage-1)
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Status: Running (Executing on YARN cluster with App id application_151782410_0199)
> ------------------------------------------------------------------------------------------
>     VERTICES   MODE       STATUS     TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
> ------------------------------------------------------------------------------------------
> Map 6 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 8 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 7 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 9 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 10 ......  container  SUCCEEDED      3          3        0        0       0       0
> Map 11 ......  container  SUCCEEDED      1          1        0        0       0       0
> Map 12 ......  container  SUCCEEDED      1          1        0        0       0       0
> Map 13 ......  container  SUCCEEDED      3          3        0        0       0       0
> Map 1 .......  container  SUCCEEDED      9          9        0        0       0       0
> Reducer 2 ...  container  SUCCEEDED      2          2        0        0       0       0
> Reducer 4 ...  container  SUCCEEDED     29         29        0        0       0       0
> Reducer 5 ...  container  SUCCEEDED      1          1        0        0       0       0
> Reducer 3      container  SUCCEEDED      0          0        0        0       0       0
> ------------------------------------------------------------------------------------------
> VERTICES: 12/13 [==>>] 100% ELAPSED TIME: 21.15 s
> ------------------------------------------------------------------------------------------
> INFO : Status: DAG finished successfully in 21.07 seconds
> {noformat}
> Reporter: Aswathy Chellammal Sreekumar > Assignee: Prasanth Jayachandran > Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-18656) Trigger with counter TOTAL_TASKS fails to result in an event even when condition is met
[ https://issues.apache.org/jira/browse/HIVE-18656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aswathy Chellammal Sreekumar resolved HIVE-18656. - Resolution: Invalid By design, the counters are tracked separately at the vertex level and the DAG level. > Trigger with counter TOTAL_TASKS fails to result in an event even when condition is met > --- > > Key: HIVE-18656 > URL: https://issues.apache.org/jira/browse/HIVE-18656 > Project: Hive > Issue Type: Bug > Components: HiveServer2 > Affects Versions: 3.0.0 > Environment: Trigger involving counter TOTAL_TASKS seems to fail to trigger event in definition even when the trigger condition is met > Trigger definition:
> {noformat}
> +--------------------------------------------------------------------------+
> | line                                                                     |
> +--------------------------------------------------------------------------+
> | plan_1[status=ACTIVE,parallelism=null,defaultPool=default]               |
> |  +  default[allocFraction=1.0,schedulingPolicy=null,parallelism=4]       |
> |  |  mapped for default                                                   |
> |  +                                                                       |
> |  |  trigger limit_task_per_vertex_trigger: if (TOTAL_TASKS > 5) { KILL } |
> +--------------------------------------------------------------------------+
> {noformat}
> Query is finishing fine even when one vertex is having 29 tasks
> {noformat}
> INFO : Query ID = hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
> INFO : Total jobs = 1
> INFO : Launching Job 1 out of 1
> INFO : Starting task [Stage-1:MAPRED] in serial mode
> INFO : Subscribed to counters: [TOTAL_TASKS] for queryId: hive_20180208193705_73642730-2c6b-4d4d-a608-a849b147bc37
> INFO : Tez session hasn't been created yet. Opening session
> INFO : Dag name: with ssales as (select c_last_name...ssales) (Stage-1)
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Setting tez.task.scale.memory.reserve-fraction to 0.3001192092896
> INFO : Status: Running (Executing on YARN cluster with App id application_151782410_0199)
> ------------------------------------------------------------------------------------------
>     VERTICES   MODE       STATUS     TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
> ------------------------------------------------------------------------------------------
> Map 6 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 8 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 7 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 9 .......  container  SUCCEEDED      1          1        0        0       0       0
> Map 10 ......  container  SUCCEEDED      3          3        0        0       0       0
> Map 11 ......  container  SUCCEEDED      1          1        0        0       0       0
> Map 12 ......  container  SUCCEEDED      1          1        0        0       0       0
> Map 13 ......  container  SUCCEEDED      3          3        0        0       0       0
> Map 1 .......  container  SUCCEEDED      9          9        0        0       0       0
> Reducer 2 ...  container  SUCCEEDED      2          2        0        0       0       0
> Reducer 4 ...  container  SUCCEEDED     29         29        0        0       0       0
> Reducer 5 ...  container  SUCCEEDED      1          1        0        0       0       0
> Reducer 3      container  SUCCEEDED      0          0        0        0       0       0
> ------------------------------------------------------------------------------------------
> VERTICES: 12/13 [==>>] 100% ELAPSED TIME: 21.15 s
> ------------------------------------------------------------------------------------------
> INFO : Status: DAG finished successfully in 21.07 seconds
> {noformat}
> Reporter: Aswathy Chellammal Sreekumar > Assignee: Prasanth Jayachandran > Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
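The resolution above notes that the counters are, by design, separated into vertex-level and DAG-level scopes. A minimal sketch of why scope matters when evaluating a threshold like `TOTAL_TASKS > 5` — class and method names here are invented for illustration and this is not Hive's actual workload-management trigger code:

```java
import java.util.Map;

// Hypothetical illustration only: evaluating the same task-count threshold
// against per-vertex counters versus a DAG-wide counter are two different
// conditions, so a trigger bound to the wrong scope may never fire the way
// the plan author expected.
public class TriggerScopeDemo {
    static final int LIMIT = 5;

    // true if any single vertex exceeds the limit (vertex-level counter)
    static boolean vertexLevelViolated(Map<String, Integer> tasksPerVertex) {
        return tasksPerVertex.values().stream().anyMatch(t -> t > LIMIT);
    }

    // true if the sum across the whole DAG exceeds the limit (DAG-level counter)
    static boolean dagLevelViolated(Map<String, Integer> tasksPerVertex) {
        return tasksPerVertex.values().stream().mapToInt(Integer::intValue).sum() > LIMIT;
    }

    public static void main(String[] args) {
        // Reducer 4 alone has 29 tasks, as in the report above
        Map<String, Integer> tasks = Map.of("Map 1", 9, "Reducer 4", 29, "Reducer 5", 1);
        System.out.println("vertex-level violated: " + vertexLevelViolated(tasks));
        System.out.println("dag-level violated: " + dagLevelViolated(tasks));
    }
}
```

The sketch only demonstrates the scope distinction the resolution describes; the real trigger expression grammar and counter names are defined by Hive's resource-plan implementation.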
[jira] [Commented] (HIVE-18586) Upgrade Derby to 10.14.1.0
[ https://issues.apache.org/jira/browse/HIVE-18586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357495#comment-16357495 ] Hive QA commented on HIVE-18586: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909660/HIVE-18586.4.patch {color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12995 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded] (batchId=206) 
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9099/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9099/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9099/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 22 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12909660 - PreCommit-HIVE-Build > Upgrade Derby to 10.14.1.0 > -- > > Key: HIVE-18586 > URL: https://issues.apache.org/jira/browse/HIVE-18586 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-18586.1.patch, HIVE-18586.2.patch, > HIVE-18586.3.patch, HIVE-18586.4.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18359) Extend grouping set limits from int to long
[ https://issues.apache.org/jira/browse/HIVE-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18359: - Attachment: HIVE-18359.11.patch > Extend grouping set limits from int to long > --- > > Key: HIVE-18359 > URL: https://issues.apache.org/jira/browse/HIVE-18359 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18359.1.patch, HIVE-18359.10.patch, > HIVE-18359.11.patch, HIVE-18359.11.patch, HIVE-18359.2.patch, > HIVE-18359.3.patch, HIVE-18359.4.patch, HIVE-18359.5.patch, > HIVE-18359.6.patch, HIVE-18359.7.patch, HIVE-18359.8.patch, HIVE-18359.9.patch > > > Grouping sets are broken for >32 columns because an int is used for the bitmap > (also for the GROUPING__ID virtual column). This assumption breaks grouping > sets/rollups/cubes when the number of participating aggregation columns is >32. > The easier fix would be to extend it to long for now. The correct fix would be > to use BitSets everywhere, but that would require the GROUPING__ID column type to > be binary, which would make predicates on GROUPING__ID difficult to deal with. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18359) Extend grouping set limits from int to long
[ https://issues.apache.org/jira/browse/HIVE-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357507#comment-16357507 ] Prasanth Jayachandran commented on HIVE-18359: -- Rebased patch > Extend grouping set limits from int to long > --- > > Key: HIVE-18359 > URL: https://issues.apache.org/jira/browse/HIVE-18359 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18359.1.patch, HIVE-18359.10.patch, > HIVE-18359.11.patch, HIVE-18359.11.patch, HIVE-18359.2.patch, > HIVE-18359.3.patch, HIVE-18359.4.patch, HIVE-18359.5.patch, > HIVE-18359.6.patch, HIVE-18359.7.patch, HIVE-18359.8.patch, HIVE-18359.9.patch > > > Grouping sets are broken for >32 columns because an int is used for the bitmap > (also for the GROUPING__ID virtual column). This assumption breaks grouping > sets/rollups/cubes when the number of participating aggregation columns is >32. > The easier fix would be to extend it to long for now. The correct fix would be > to use BitSets everywhere, but that would require the GROUPING__ID column type to > be binary, which would make predicates on GROUPING__ID difficult to deal with. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
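The description above can be made concrete: a grouping-set ID is a bitmap with one bit per grouping column, so a 32-bit int runs out at 32 columns. The following sketch is a hypothetical illustration of the int-vs-long difference, not Hive's actual GROUPING__ID code:

```java
// Hypothetical illustration of the grouping-set bitmap described above.
// Each grouping column gets one bit; with an int bitmap, bit 32 and beyond
// cannot be represented, which is exactly the >32-column breakage reported.
public class GroupingIdDemo {
    // Bit i is set when column i participates in the grouping set.
    static long groupingId(boolean[] grouped) {
        long id = 0L;
        for (int i = 0; i < grouped.length; i++) {
            if (grouped[i]) {
                id |= 1L << i;  // must be 1L: the int literal 1 truncates past bit 31
            }
        }
        return id;
    }

    public static void main(String[] args) {
        boolean[] cols = new boolean[33];
        cols[32] = true;                      // a 33rd grouping column
        System.out.println(groupingId(cols)); // representable with a long bitmap
        // Casting the same bit down to int loses it entirely:
        System.out.println((int) (1L << 32)); // prints 0
    }
}
```

Switching the bitmap (and GROUPING__ID) to long, as the patch proposes, raises the limit to 64 columns; a BitSet would remove the limit but, as noted, would force a binary column type.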
[jira] [Commented] (HIVE-17835) HS2 Logs print unnecessary stack trace when HoS query is cancelled
[ https://issues.apache.org/jira/browse/HIVE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357517#comment-16357517 ] Hive QA commented on HIVE-17835:
| (/) *{color:green}+1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 28s{color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 43e7137 |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9100/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |
This message was automatically generated. 
> HS2 Logs print unnecessary stack trace when HoS query is cancelled > -- > > Key: HIVE-17835 > URL: https://issues.apache.org/jira/browse/HIVE-17835 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-17835.1.patch, HIVE-17835.2.patch, > HIVE-17835.3.patch > > > Example: > {code} > 2017-10-05 17:47:11,881 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1748) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1501) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1285) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1280) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(
[jira] [Commented] (HIVE-18647) Cannot create table: "message:Exception thrown when executing query : SELECT DISTINCT.."
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357534#comment-16357534 ] Deepak Jaiswal commented on HIVE-18647: --- Thanks for bringing this up. I am working on the fix. > Cannot create table: "message:Exception thrown when executing query : SELECT > DISTINCT.." > > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18647) Cannot create table: "message:Exception thrown when executing query : SELECT DISTINCT.."
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-18647: - Assignee: Deepak Jaiswal > Cannot create table: "message:Exception thrown when executing query : SELECT > DISTINCT.." > > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18368) Improve Spark Debug RDD Graph
[ https://issues.apache.org/jira/browse/HIVE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18368: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Rui for the review. > Improve Spark Debug RDD Graph > - > > Key: HIVE-18368 > URL: https://issues.apache.org/jira/browse/HIVE-18368 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.0.0 > > Attachments: Completed Stages.png, HIVE-18368.1.patch, > HIVE-18368.2.patch, HIVE-18368.3.patch, HIVE-18368.4.patch, Job Ids.png, > Stage DAG 1.png, Stage DAG 2.png > > > The {{SparkPlan}} class does some logging to show the mapping between > different {{SparkTran}}, what shuffle types are used, and what trans are > cached. However, there is room for improvement. > When debug logging is enabled the RDD graph is logged, but there isn't much > information printed about each RDD. > We should combine both of the graphs and improve them. We could even make the > Spark Plan graph part of the {{EXPLAIN EXTENDED}} output. > Ideally, the final graph shows a clear relationship between Tran objects, > RDDs, and BaseWorks. Edge should include information about number of > partitions, shuffle types, Spark operations used, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17983) Make the standalone metastore generate tarballs etc.
[ https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-17983: -- Attachment: HIVE-17983.5.patch > Make the standalone metastore generate tarballs etc. > > > Key: HIVE-17983 > URL: https://issues.apache.org/jira/browse/HIVE-17983 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Labels: pull-request-available > Attachments: HIVE-17983.2.patch, HIVE-17983.3.patch, > HIVE-17983.4.patch, HIVE-17983.5.patch, HIVE-17983.patch > > > In order to be separately installable the standalone metastore needs its own > tarballs, startup scripts, etc. All of the SQL installation and upgrade > scripts also need to move from metastore to standalone-metastore. > I also plan to create Dockerfiles for different database types so that > developers can test the SQL installation and upgrade scripts. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18657) Fix checkstyle violations for Semantic Analyzer
[ https://issues.apache.org/jira/browse/HIVE-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-18657: -- > Fix checkstyle violations for Semantic Analyzer > --- > > Key: HIVE-18657 > URL: https://issues.apache.org/jira/browse/HIVE-18657 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > > SemanticAnalyzer.java has quite a few checkstyle violations which should be > fixed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17284) remove OrcRecordUpdater.deleteEventIndexBuilder
[ https://issues.apache.org/jira/browse/HIVE-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-17284: -- Description: There is no point in it. We know how many rows a delete_delta file has from ORC and they are all the same type - so no need for AcidStats. hive.acid.key.index has no value since delete_delta files are never split and are not likely to have more than 1 stripe since they are very small. Also can remove KeyIndexBuilder.acidStats - we only have 1 type of event per file if doing this, make sure to fix {{OrcInputFormat.isOriginal(Reader)}} and {{OrcInputFormat.isOriginal(Footer)}} etc was: There is no point in it. We know how many rows a delete_delta file has from ORC and they are all the same type - so no need for AcidStats. hive.acid.key.index has no value since delete_delta files are never split and are not likely to have more than 1 stripe since they are very small. Also can remove KeyIndexBuilder.acidStats - we only have 1 type of event per file > remove OrcRecordUpdater.deleteEventIndexBuilder > --- > > Key: HIVE-17284 > URL: https://issues.apache.org/jira/browse/HIVE-17284 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Minor > > There is no point in it. We know how many rows a delete_delta file has from > ORC and they are all the same type - so no need for AcidStats. > hive.acid.key.index has no value since delete_delta files are never split > and are not likely to have more than 1 stripe since they are very small. > Also can remove KeyIndexBuilder.acidStats - we only have 1 type of event per > file > > if doing this, make sure to fix {{OrcInputFormat.isOriginal(Reader)}} and > {{OrcInputFormat.isOriginal(Footer)}} etc -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-18647) Cannot create table: "message:Exception thrown when executing query : SELECT DISTINCT.."
[ https://issues.apache.org/jira/browse/HIVE-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal resolved HIVE-18647. --- Resolution: Fixed Reverted HIVE-18350. Will work on it with different approach. > Cannot create table: "message:Exception thrown when executing query : SELECT > DISTINCT.." > > > Key: HIVE-18647 > URL: https://issues.apache.org/jira/browse/HIVE-18647 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Assignee: Deepak Jaiswal >Priority: Major > Fix For: 3.0.0 > > > I'm using latest master branch code and mysql as metastore. > Creating table hits this error: > {noformat} > 2018-02-07T22:04:55,438 ERROR [41f91bf4-bc49-4a73-baee-e2a1d79b8a4e main] > metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of > 10) with error: javax.jdo.JDODataStoreException: Insert of object > "org.apache.hadoop.hive.metastore.model.MTable@28d16af8" using statement > "INSERT INTO `TBLS` > (`TBL_ID`,`CREATE_TIME`,`CREATION_METADATA_MV_CREATION_METADATA_ID_OID`,`DB_ID`,`LAST_ACCESS_TIME`,`OWNER`,`RETENTION`,`IS_REWRITE_ENABLED`,`SD_ID`,`TBL_NAME`,`TBL_TYPE`,`VIEW_EXPANDED_TEXT`,`VIEW_ORIGINAL_TEXT`) > VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?)" failed : Unknown column > 'CREATION_METADATA_MV_CREATION_METADATA_ID_OID' in 'field list' > at > org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543) > at > org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:729) > at > org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749) > at > org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:1125) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > 
at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) > at com.sun.proxy.$Proxy36.createTable(Unknown Source) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1506) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1412) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1614) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (HIVE-18350) load data should rename files consistent with insert statements
[ https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reopened HIVE-18350: --- > load data should rename files consistent with insert statements > --- > > Key: HIVE-18350 > URL: https://issues.apache.org/jira/browse/HIVE-18350 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18350.1.patch, HIVE-18350.10.patch, > HIVE-18350.11.patch, HIVE-18350.12.patch, HIVE-18350.13.patch, > HIVE-18350.14.patch, HIVE-18350.15.patch, HIVE-18350.16.patch, > HIVE-18350.2.patch, HIVE-18350.3.patch, HIVE-18350.4.patch, > HIVE-18350.5.patch, HIVE-18350.6.patch, HIVE-18350.7.patch, > HIVE-18350.8.patch, HIVE-18350.9.patch > > > Insert statements create files of format ending with _0, 0001_0 etc. > However, the load data uses the input file name. That results in inconsistent > naming convention which makes SMB joins difficult in some scenarios and may > cause trouble for other types of queries in future. > We need consistent naming convention. > For non-bucketed table, hive renames all the files regardless of how they > were named by the user. > For bucketed table, hive relies on user to name the files matching the > bucket in non-strict mode. Hive assumes that the data belongs to same bucket > in a file. In strict mode, loading bucketed table is disabled. > This will likely affect most of the tests which load data which is pretty > significant due to which it is further divided into two subtasks for smoother > merge. > For existing tables in customer database, it is recommended to reload > bucketed tables otherwise if customer tries to run SMB join and there is a > bucket for which there is no split, then there is a possibility of getting > incorrect results. However, this is not a regression as it would happen even > without the patch. > With this patch however, and reloading data, the results should be correct. 
> For non-bucketed tables and external tables, there is no difference in > behavior and reloading data is not needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
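The description above says insert statements produce file names of the form ending with _0, 0001_0, etc., while load data keeps the user's original file names. A hypothetical helper (not the patch's actual code; the six-digit zero-padding shown here is an assumption modeled on names like 000000_0) sketching the bucket-style naming convention that renamed files would follow:

```java
// Hypothetical sketch of the consistent naming convention discussed above:
// a zero-padded bucket/task id, an underscore, and a copy index. Not the
// actual HIVE-18350 patch code; padding width is an assumption.
public class BucketFileNameDemo {
    static String bucketFileName(int bucketId, int copyIndex) {
        // %06d zero-pads the bucket id to six digits, e.g. 13 -> "000013"
        return String.format("%06d_%d", bucketId, copyIndex);
    }

    public static void main(String[] args) {
        System.out.println(bucketFileName(0, 0));   // 000000_0
        System.out.println(bucketFileName(13, 0));  // 000013_0
    }
}
```

With names like these, the bucket a file belongs to can be read directly from the file name, which is what SMB joins rely on when matching splits to buckets.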
[jira] [Commented] (HIVE-17835) HS2 Logs print unnecessary stack trace when HoS query is cancelled
[ https://issues.apache.org/jira/browse/HIVE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357591#comment-16357591 ] Hive QA commented on HIVE-17835: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909664/HIVE-17835.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 12995 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_opt_shuffle_serde] (batchId=180) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query1] (batchId=250) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.authorization.TestCLIAuthzSessionContext.testAuthzSessionContextContents (batchId=238) org.apache.hive.spark.client.rpc.TestRpc.testClientTimeout (batchId=297) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9100/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9100/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9100/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 25 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12909664 - PreCommit-HIVE-Build > HS2 Logs print unnecessary stack trace when HoS query is cancelled > -- > > Key: HIVE-17835 > URL: https://issues.apache.org/jira/browse/HIVE-17835 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-17835.1.patch, HIVE-17835.2.patch, > HIVE-17835.3.patch > > > Example: > {code} > 2017-10-05 17:47:11,881 ERROR > org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor: > [HiveServer2-Background-Pool: Thread-131]: Failed to monitor Job[ 2] with > exception 'java.lang.InterruptedException(sleep interrupted)' > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hive.ql.exec.spark.status.RemoteSparkJobMonitor.startMonitor(RemoteSparkJobMonitor.java:124) > at > org.apache.hadoop.hive.ql.exec.spark.status.impl.RemoteSparkJobRef.monitorJob(RemoteSparkJobRef.java:60) > at > org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:111) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2052) > at org.apache.hadoop.hive.ql
[jira] [Commented] (HIVE-18456) Add some tests for HIVE-18367 to check that the table information contains the query correctly
[ https://issues.apache.org/jira/browse/HIVE-18456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357612#comment-16357612 ] Hive QA commented on HIVE-18456: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 3464df4 | | Default Java | 1.8.0_111 | | modules | C: itests/hive-unit U: itests/hive-unit | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9102/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Add some tests for HIVE-18367 to check that the table information contains > the query correctly > -- > > Key: HIVE-18456 > URL: https://issues.apache.org/jira/browse/HIVE-18456 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18456.1.patch, HIVE-18456.2.patch > > > This cannot be tested with a CliDriver test so add a java test to check the > output of 'describe extended', which is changed by HIVE-18367 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18218) SMB Join : Handle buckets with no splits.
[ https://issues.apache.org/jira/browse/HIVE-18218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18218: -- Attachment: HIVE-18218.2.patch > SMB Join : Handle buckets with no splits. > - > > Key: HIVE-18218 > URL: https://issues.apache.org/jira/browse/HIVE-18218 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18218.1.patch, HIVE-18218.2.patch > > > While working on HIVE-18208, it was found that with SMB, the results are > incorrect. This most likely is a product issue. > auto_sortmerge_join_16 fails with wrong results due to this. > cc [~hagleitn] > The current logic in CustomPartitionVertex assumes that there is a split for > each bucket whereas in Tez, we can have no splits for empty buckets. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18218) SMB Join : Handle buckets with no splits.
[ https://issues.apache.org/jira/browse/HIVE-18218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18218: -- Attachment: HIVE-18218.3.patch > SMB Join : Handle buckets with no splits. > - > > Key: HIVE-18218 > URL: https://issues.apache.org/jira/browse/HIVE-18218 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18218.1.patch, HIVE-18218.2.patch, > HIVE-18218.3.patch > > > While working on HIVE-18208, it was found that with SMB, the results are > incorrect. This most likely is a product issue. > auto_sortmerge_join_16 fails with wrong results due to this. > cc [~hagleitn] > The current logic in CustomPartitionVertex assumes that there is a split for > each bucket whereas in Tez, we can have no splits for empty buckets. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
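The description notes that CustomPartitionVertex assumes one split per bucket, while Tez may produce no splits at all for empty buckets. The following is a minimal, self-contained sketch of the defensive pattern — group splits by bucket and treat a missing bucket as an empty list rather than assuming an entry exists. All class and method names here are hypothetical stand-ins for illustration; this is not Hive's actual CustomPartitionVertex code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only -- not Hive's CustomPartitionVertex. It contrasts
// assuming a split per bucket with handling buckets that have no splits.
public class BucketSplitDemo {

    // Group incoming splits by bucket id; empty buckets simply have no entry.
    static Map<Integer, List<String>> groupByBucket(List<String> splits,
                                                    Map<String, Integer> splitToBucket) {
        Map<Integer, List<String>> byBucket = new HashMap<>();
        for (String split : splits) {
            byBucket.computeIfAbsent(splitToBucket.get(split), b -> new ArrayList<>())
                    .add(split);
        }
        return byBucket;
    }

    // Defensive lookup: a bucket with no splits yields an empty list,
    // where a naive byBucket.get(bucket) would return null and NPE later.
    static List<String> splitsForBucket(Map<Integer, List<String>> byBucket, int bucket) {
        return byBucket.getOrDefault(bucket, new ArrayList<>());
    }

    public static void main(String[] args) {
        Map<String, Integer> splitToBucket = new HashMap<>();
        splitToBucket.put("split-a", 0);
        splitToBucket.put("split-b", 0);
        Map<Integer, List<String>> byBucket =
            groupByBucket(new ArrayList<>(splitToBucket.keySet()), splitToBucket);
        System.out.println(splitsForBucket(byBucket, 0).size()); // 2
        System.out.println(splitsForBucket(byBucket, 1).size()); // 0: empty bucket, no NPE
    }
}
```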
[jira] [Updated] (HIVE-18387) Minimize time that REBUILD locks the materialized view
[ https://issues.apache.org/jira/browse/HIVE-18387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18387: --- Attachment: HIVE-18387.04.patch > Minimize time that REBUILD locks the materialized view > -- > > Key: HIVE-18387 > URL: https://issues.apache.org/jira/browse/HIVE-18387 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18387.01.patch, HIVE-18387.02.patch, > HIVE-18387.03.patch, HIVE-18387.04.patch, HIVE-18387.patch > > > Currently, REBUILD will block the materialized view while the final move task > is being executed. The idea for this improvement is to create the new > materialization in a new folder (new version) and then just flip the pointer > to the folder in the MV definition in the metastore. REBUILD operations for a > given MV should get an exclusive lock though, i.e., they cannot be executed > concurrently. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18622) Vectorization: IF Statements, Comparisons, and more do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18622: Attachment: HIVE-18622.05.patch > Vectorization: IF Statements, Comparisons, and more do not handle NULLs > correctly > - > > Key: HIVE-18622 > URL: https://issues.apache.org/jira/browse/HIVE-18622 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18622.03.patch, HIVE-18622.04.patch, > HIVE-18622.05.patch > > > > Many vector expression classes are setting noNulls to true, which does not > work if the VRB is a scratch column being reused. The previous use may have > set noNulls to false and the isNull array will have some rows marked as NULL. > The result is wrong query results and sometimes NPEs (for BytesColumnVector). > So, many vector expressions need this: > {code:java} > // Carefully handle NULLs... > /* >* For better performance on LONG/DOUBLE we don't want the conditional >* statements inside the for loop. >*/ > outputColVector.noNulls = false; > {code} > And, vector expressions need to make sure the isNull array entry is set when > outputColVector.noNulls is false. > And, all places that assign a column value need to set noNulls to false when the > value is NULL. > Almost all cases where noNulls is set to true are incorrect. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
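The pattern the description asks for can be sketched with a tiny stand-in class. `MiniColumnVector` below is a hypothetical stand-in whose fields mirror the noNulls/isNull convention quoted above — it is not Hive's actual `LongColumnVector`. The buggy variant blindly claims `noNulls = true` on a reused scratch column, silently dropping an input NULL; the fixed variant sets `noNulls = false` and refreshes every `isNull` entry, keeping the conditional out of the value loop as the quoted comment recommends.

```java
// Hypothetical stand-in for a Hive column vector -- illustration only.
class MiniColumnVector {
    long[] vector;
    boolean[] isNull;
    boolean noNulls;
    MiniColumnVector(int n) { vector = new long[n]; isNull = new boolean[n]; noNulls = true; }
}

public class NoNullsDemo {

    // Buggy pattern: claims noNulls = true, so readers ignore isNull and
    // any NULL in the input (or stale state in a reused column) is lost.
    static void addOneBuggy(MiniColumnVector in, MiniColumnVector out, int n) {
        out.noNulls = true; // wrong when NULLs exist or 'out' is a reused scratch column
        for (int i = 0; i < n; i++) {
            out.vector[i] = in.vector[i] + 1;
        }
    }

    // Fixed pattern per the description: mark the output as possibly-null and
    // refresh every isNull entry, so stale state from a previous use cannot leak.
    static void addOneFixed(MiniColumnVector in, MiniColumnVector out, int n) {
        // Carefully handle NULLs...
        out.noNulls = false;
        for (int i = 0; i < n; i++) {
            out.isNull[i] = in.isNull[i];     // always refreshed, no conditional in the loop
            out.vector[i] = in.vector[i] + 1; // value on NULL rows is garbage but masked by isNull
        }
    }

    public static void main(String[] args) {
        MiniColumnVector in = new MiniColumnVector(2);
        in.noNulls = false;
        in.isNull[1] = true;  // row 1 is NULL
        in.vector[0] = 41;

        MiniColumnVector scratch = new MiniColumnVector(2);
        addOneBuggy(in, scratch, 2);
        System.out.println(scratch.noNulls); // true: the input NULL was silently dropped

        addOneFixed(in, scratch, 2);
        System.out.println(!scratch.noNulls && scratch.isNull[1]); // true: NULL preserved
        System.out.println(scratch.vector[0]); // 42
    }
}
```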
[jira] [Updated] (HIVE-18387) Minimize time that REBUILD locks the materialized view
[ https://issues.apache.org/jira/browse/HIVE-18387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18387: --- Attachment: HIVE-18387.04.patch > Minimize time that REBUILD locks the materialized view > -- > > Key: HIVE-18387 > URL: https://issues.apache.org/jira/browse/HIVE-18387 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18387.01.patch, HIVE-18387.02.patch, > HIVE-18387.03.patch, HIVE-18387.04.patch, HIVE-18387.patch > > > Currently, REBUILD will block the materialized view while the final move task > is being executed. The idea for this improvement is to create the new > materialization in a new folder (new version) and then just flip the pointer > to the folder in the MV definition in the metastore. REBUILD operations for a > given MV should get an exclusive lock though, i.e., they cannot be executed > concurrently. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18387) Minimize time that REBUILD locks the materialized view
[ https://issues.apache.org/jira/browse/HIVE-18387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18387: --- Attachment: (was: HIVE-18387.04.patch) > Minimize time that REBUILD locks the materialized view > -- > > Key: HIVE-18387 > URL: https://issues.apache.org/jira/browse/HIVE-18387 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18387.01.patch, HIVE-18387.02.patch, > HIVE-18387.03.patch, HIVE-18387.04.patch, HIVE-18387.patch > > > Currently, REBUILD will block the materialized view while the final move task > is being executed. The idea for this improvement is to create the new > materialization in a new folder (new version) and then just flip the pointer > to the folder in the MV definition in the metastore. REBUILD operations for a > given MV should get an exclusive lock though, i.e., they cannot be executed > concurrently. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18492) Wrong argument in the WorkloadManager.resetAndQueryKill()
[ https://issues.apache.org/jira/browse/HIVE-18492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18492: --- Assignee: Oleg Danilov (was: Sergey Shelukhin) > Wrong argument in the WorkloadManager.resetAndQueryKill() > - > > Key: HIVE-18492 > URL: https://issues.apache.org/jira/browse/HIVE-18492 > Project: Hive > Issue Type: Bug >Reporter: Oleg Danilov >Assignee: Oleg Danilov >Priority: Trivial > Attachments: HIVE-18492.03.patch, HIVE-18492.04.patch, > HIVE-18492.2.patch, HIVE-18492.patch > > > Caused by HIVE-18088, [~prasanth_j], could you please check this? > {code:java} > private void resetAndQueueKill(Map > toKillQuery, > KillQueryContext killQueryContext, Map toReuse) { > WmTezSession toKill = killQueryContext.session; > ... > if (poolState != null) { > poolState.getSessions().remove(toKill); > poolState.getInitializingSessions().remove(toKill); > ... > {code} > getInitializingSessions() returns List of SessionInitContext, so toKill > definitely can't be in this list and therefore there is no need to remove it. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18492) Wrong argument in the WorkloadManager.resetAndQueryKill()
[ https://issues.apache.org/jira/browse/HIVE-18492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18492: Attachment: HIVE-18492.04.patch > Wrong argument in the WorkloadManager.resetAndQueryKill() > - > > Key: HIVE-18492 > URL: https://issues.apache.org/jira/browse/HIVE-18492 > Project: Hive > Issue Type: Bug >Reporter: Oleg Danilov >Assignee: Sergey Shelukhin >Priority: Trivial > Attachments: HIVE-18492.03.patch, HIVE-18492.04.patch, > HIVE-18492.2.patch, HIVE-18492.patch > > > Caused by HIVE-18088, [~prasanth_j], could you please check this? > {code:java} > private void resetAndQueueKill(Map > toKillQuery, > KillQueryContext killQueryContext, Map toReuse) { > WmTezSession toKill = killQueryContext.session; > ... > if (poolState != null) { > poolState.getSessions().remove(toKill); > poolState.getInitializingSessions().remove(toKill); > ... > {code} > getInitializingSessions() returns List of SessionInitContext, so toKill > definitely can't be in this list and therefore there is no need to remove it. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18492) Wrong argument in the WorkloadManager.resetAndQueryKill()
[ https://issues.apache.org/jira/browse/HIVE-18492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18492: --- Assignee: Sergey Shelukhin (was: Oleg Danilov) > Wrong argument in the WorkloadManager.resetAndQueryKill() > - > > Key: HIVE-18492 > URL: https://issues.apache.org/jira/browse/HIVE-18492 > Project: Hive > Issue Type: Bug >Reporter: Oleg Danilov >Assignee: Sergey Shelukhin >Priority: Trivial > Attachments: HIVE-18492.03.patch, HIVE-18492.04.patch, > HIVE-18492.2.patch, HIVE-18492.patch > > > Caused by HIVE-18088, [~prasanth_j], could you please check this? > {code:java} > private void resetAndQueueKill(Map > toKillQuery, > KillQueryContext killQueryContext, Map toReuse) { > WmTezSession toKill = killQueryContext.session; > ... > if (poolState != null) { > poolState.getSessions().remove(toKill); > poolState.getInitializingSessions().remove(toKill); > ... > {code} > getInitializingSessions() returns List of SessionInitContext, so toKill > definitely can't be in this list and therefore there is no need to remove it. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18492) Wrong argument in the WorkloadManager.resetAndQueryKill()
[ https://issues.apache.org/jira/browse/HIVE-18492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357666#comment-16357666 ] Sergey Shelukhin commented on HIVE-18492: - Modified the patch to use ==, since we should never have more than one object for the same session. [~prasanth_j] does that make sense? > Wrong argument in the WorkloadManager.resetAndQueryKill() > - > > Key: HIVE-18492 > URL: https://issues.apache.org/jira/browse/HIVE-18492 > Project: Hive > Issue Type: Bug >Reporter: Oleg Danilov >Assignee: Oleg Danilov >Priority: Trivial > Attachments: HIVE-18492.03.patch, HIVE-18492.04.patch, > HIVE-18492.2.patch, HIVE-18492.patch > > > Caused by HIVE-18088, [~prasanth_j], could you please check this? > {code:java} > private void resetAndQueueKill(Map > toKillQuery, > KillQueryContext killQueryContext, Map toReuse) { > WmTezSession toKill = killQueryContext.session; > ... > if (poolState != null) { > poolState.getSessions().remove(toKill); > poolState.getInitializingSessions().remove(toKill); > ... > {code} > getInitializingSessions() returns List of SessionInitContext, so toKill > definitely can't be in this list and therefore there is no need to remove it. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
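The bug in the quoted snippet hinges on a Java subtlety: `List.remove(Object)` accepts any `Object`, so calling it with a value whose type can never `equals()` an element compiles cleanly but silently removes nothing. A standalone demonstration with hypothetical stand-in classes (these are not Hive's actual `SessionInitContext`/`WmTezSession` types, only empty placeholders for the type mismatch):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in types (hypothetical): the point is only the type mismatch seen in
// poolState.getInitializingSessions().remove(toKill) from the description.
public class RemoveNoOpDemo {
    static class SessionInitContext { }
    static class WmTezSession { }

    public static void main(String[] args) {
        List<SessionInitContext> initializing = new ArrayList<>();
        initializing.add(new SessionInitContext());

        WmTezSession toKill = new WmTezSession();
        // List.remove(Object) compiles for any argument type, but equals()
        // never matches across unrelated classes, so this is always a no-op.
        boolean removed = initializing.remove(toKill);
        System.out.println(removed);             // false
        System.out.println(initializing.size()); // 1: nothing was removed
    }
}
```

This is also why such bugs survive compilation: unlike `Map.get(Object)`, there is no generic-parameter check on `remove`, so only a code reviewer (or a findbugs-style checker) catches the dead call.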
[jira] [Commented] (HIVE-18456) Add some tests for HIVE-18367 to check that the table information contains the query correctly
[ https://issues.apache.org/jira/browse/HIVE-18456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16357668#comment-16357668 ] Hive QA commented on HIVE-18456: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12909670/HIVE-18456.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12995 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_opt_shuffle_serde] (batchId=180) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query1] (batchId=250) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] (batchId=250) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221) org.apache.hadoop.hive.metastore.client.TestAddPartitionsFromPartSpec.testAddPartitionSpecChangeRootPathToNull[Embedded] (batchId=206) org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9102/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9102/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9102/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 24 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12909670 - PreCommit-HIVE-Build > Add some tests for HIVE-18367 to check that the table information contains > the query correctly > -- > > Key: HIVE-18456 > URL: https://issues.apache.org/jira/browse/HIVE-18456 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18456.1.patch, HIVE-18456.2.patch > > > This cannot be tested with a CliDriver test so add a java test to check the > output of 'describe extended', which is changed by HIVE-18367 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18659) add acid version marker to acid files
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-18659: - > add acid version marker to acid files > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18659) add acid version marker to acid files
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18659: -- Status: Patch Available (was: Open) > add acid version marker to acid files > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18659.01.patch > > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)