[jira] [Updated] (HIVE-17856) MM tables - IOW is not ACID compliant
[ https://issues.apache.org/jira/browse/HIVE-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Yeom updated HIVE-17856:
------------------------------
    Status: Patch Available  (was: Open)

> MM tables - IOW is not ACID compliant
> --------------------------------------
>
>          Key: HIVE-17856
>          URL: https://issues.apache.org/jira/browse/HIVE-17856
>      Project: Hive
>   Issue Type: Sub-task
>   Components: Transactions
>     Reporter: Sergey Shelukhin
>     Assignee: Steve Yeom
>       Labels: mm-gap-1
>  Attachments: HIVE-17856.1.patch
>
> The following tests were removed from mm_all during "integration"... I should
> have never allowed such manner of integration.
> MM logic should have been kept intact until ACID logic could catch up. Alas,
> here we are.
> {noformat}
> drop table iow0_mm;
> create table iow0_mm(key int) tblproperties("transactional"="true",
> "transactional_properties"="insert_only");
> insert overwrite table iow0_mm select key from intermediate;
> insert into table iow0_mm select key + 1 from intermediate;
> select * from iow0_mm order by key;
> insert overwrite table iow0_mm select key + 2 from intermediate;
> select * from iow0_mm order by key;
> drop table iow0_mm;
>
> drop table iow1_mm;
> create table iow1_mm(key int) partitioned by (key2 int)
> tblproperties("transactional"="true", "transactional_properties"="insert_only");
> insert overwrite table iow1_mm partition (key2)
> select key as k1, key from intermediate union all select key as k1, key from intermediate;
> insert into table iow1_mm partition (key2)
> select key + 1 as k1, key from intermediate union all select key as k1, key from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key from intermediate union all select key + 4 as k1, key from intermediate;
> select * from iow1_mm order by key, key2;
> insert overwrite table iow1_mm partition (key2)
> select key + 3 as k1, key + 3 from intermediate union all select key + 2 as k1, key + 2 from intermediate;
> select * from iow1_mm order by key, key2;
> drop table iow1_mm;
> {noformat}
> {noformat}
> drop table simple_mm;
> create table simple_mm(key int) stored as orc tblproperties
> ("transactional"="true", "transactional_properties"="insert_only");
> insert into table simple_mm select key from intermediate;
> -insert overwrite table simple_mm select key from intermediate;
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
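For readers following the ACID-compliance point: on an insert-only (MM) table, an insert-overwrite is expected to create a new base that supersedes all earlier deltas, so a reader sees only the newest base plus any later deltas. A toy sketch of that directory-visibility rule, under the assumption of the standard ACID `base_N`/`delta_N_N` naming (the resolver class itself is illustrative, not Hive's actual reader code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of ACID directory selection for an insert-only (MM) table.
// An insert creates delta_N_N; an insert-overwrite creates base_N. A reader must
// ignore every directory written before the newest base, which is what makes
// IOW "overwrite" rather than "append".
public class MmDirectoryResolver {
    // Returns the directories a reader should scan: the newest base (if any)
    // plus all deltas with a higher write id.
    public static List<String> visibleDirs(List<String> dirs) {
        long newestBase = -1;
        for (String d : dirs) {
            if (d.startsWith("base_")) {
                newestBase = Math.max(newestBase, Long.parseLong(d.substring(5)));
            }
        }
        List<String> visible = new ArrayList<>();
        for (String d : dirs) {
            long writeId = d.startsWith("base_")
                ? Long.parseLong(d.substring(5))
                : Long.parseLong(d.split("_")[1]);
            boolean keep = d.startsWith("base_") ? writeId == newestBase : writeId > newestBase;
            if (keep) {
                visible.add(d);
            }
        }
        return visible;
    }
}
```

With directories `delta_1_1`, `base_2`, `delta_3_3`, only `base_2` and `delta_3_3` remain visible; the failing tests above exercise exactly this expectation through repeated IOW/insert/select rounds.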
[jira] [Updated] (HIVE-17856) MM tables - IOW is not ACID compliant
[ https://issues.apache.org/jira/browse/HIVE-17856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Yeom updated HIVE-17856:
------------------------------
    Attachment: HIVE-17856.1.patch
[jira] [Commented] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239988#comment-16239988 ]

Hive QA commented on HIVE-17985:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12896146/HIVE-17985-branch-2.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10560 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[comments] (batchId=35)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion02 (batchId=263)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdate.testNonAcidToAcidConversion02 (batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion02 (batchId=272)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7658/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7658/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7658/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12896146 - PreCommit-HIVE-Build

> When check the partitions size in the partitioned table, it will throw
> NullPointerException
>
>          Key: HIVE-17985
>          URL: https://issues.apache.org/jira/browse/HIVE-17985
>      Project: Hive
>   Issue Type: Bug
>   Components: Parser, Physical Optimizer
>   Affects Versions: 1.2.2, 2.3.0, 3.0.0
>     Reporter: wan kun
>     Assignee: wan kun
>      Fix For: 3.0.0
>  Attachments: HIVE-17985-branch-1.2.patch, HIVE-17985-branch-2.3.patch, HIVE-17985.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When the hive.limit.query.max.table.partition parameter is set, the
> SemanticAnalyzer will throw NullPointerException.
[jira] [Commented] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239985#comment-16239985 ]

wan kun commented on HIVE-17985:
--------------------------------

I have just fixed a NullPointerException, but the build has some unexpected errors which I have never met before. Could [~thejas] or [~gopalv] take a look? Many thanks!
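The patch itself is attached to the issue; as background, the failure described here is the classic unguarded-dereference pattern: a partition-count limit check reads a list that can legitimately be null. A hypothetical sketch of the kind of guard involved (class and method names are illustrative, not the actual SemanticAnalyzer code):

```java
import java.util.List;

// Illustrative guard for a partition-count limit check. If the pruned
// partition list was never populated (null), treating it as "no partitions"
// avoids calling size() on null and the resulting NullPointerException.
public class PartitionLimitCheck {
    public static boolean exceedsLimit(List<String> prunedPartitions, int maxTablePartition) {
        if (maxTablePartition <= 0) {
            return false; // limit not configured
        }
        int count = (prunedPartitions == null) ? 0 : prunedPartitions.size();
        return count > maxTablePartition;
    }
}
```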
[jira] [Commented] (HIVE-17964) HoS: some spark configs doesn't require re-creating a session
[ https://issues.apache.org/jira/browse/HIVE-17964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239980#comment-16239980 ]

Rui Li commented on HIVE-17964:
-------------------------------

I think renaming a bunch of configs is not very user friendly. Maybe we should differentiate these configs in our code.

> HoS: some spark configs doesn't require re-creating a session
> --------------------------------------------------------------
>
>          Key: HIVE-17964
>          URL: https://issues.apache.org/jira/browse/HIVE-17964
>      Project: Hive
>   Issue Type: Improvement
>     Reporter: Rui Li
>     Priority: Minor
>
> I guess the {{hive.spark.}} configs were initially intended for the RSC.
> Therefore when they're changed, we'll re-create the session for them to take
> effect. There're some configs not related to RSC that also start with
> {{hive.spark.}}. We'd better rename them so that we don't unnecessarily
> re-create sessions, which is usually time consuming.
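One way to read Rui Li's suggestion: instead of renaming the properties, keep an explicit set of `hive.spark.*` keys that actually feed the remote Spark client, and only re-create the session when one of those changes. A minimal sketch under that assumption (the keys in the set are examples, not an authoritative list of RSC configs):

```java
import java.util.Set;

// Illustrative classification of changed config keys. Only keys that are
// actually passed to the remote Spark client (RSC) force an expensive
// session re-creation; other hive.spark.* keys can take effect in place.
public class SparkConfClassifier {
    // Example RSC-related keys; the real list would live alongside the RSC code.
    private static final Set<String> RSC_CONFIGS = Set.of(
        "hive.spark.client.connect.timeout",
        "hive.spark.client.server.connect.timeout");

    public static boolean requiresNewSession(String changedKey) {
        // Raw spark.* properties configure the Spark application itself,
        // so a change to them is assumed here to need a new session too.
        return changedKey.startsWith("spark.") || RSC_CONFIGS.contains(changedKey);
    }
}
```

This keeps the user-visible names stable while letting the code skip the time-consuming re-creation for the non-RSC `hive.spark.*` settings.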
[jira] [Updated] (HIVE-17945) Support column projection for index access when using Parquet Vectorization
[ https://issues.apache.org/jira/browse/HIVE-17945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ferdinand Xu updated HIVE-17945:
--------------------------------
       Resolution: Fixed
    Fix Version/s: 2.4.0
                   3.0.0
           Status: Resolved  (was: Patch Available)

> Support column projection for index access when using Parquet Vectorization
> ----------------------------------------------------------------------------
>
>          Key: HIVE-17945
>          URL: https://issues.apache.org/jira/browse/HIVE-17945
>      Project: Hive
>   Issue Type: Sub-task
>     Reporter: Ferdinand Xu
>     Assignee: Ferdinand Xu
>      Fix For: 3.0.0, 2.4.0
>  Attachments: HIVE-17945-barnch-2.patch, HIVE-17945.patch
[jira] [Commented] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239957#comment-16239957 ]

Hive QA commented on HIVE-17985:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12896146/HIVE-17985-branch-2.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10560 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[comments] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=142)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion02 (batchId=263)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdate.testNonAcidToAcidConversion02 (batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion02 (batchId=272)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7657/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7657/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7657/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12896146 - PreCommit-HIVE-Build
[jira] [Updated] (HIVE-17678) appendPartition in HiveMetaStoreClient does not conform to the IMetaStoreClient.
[ https://issues.apache.org/jira/browse/HIVE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nithish updated HIVE-17678:
---------------------------
    Attachment: HIVE-17678.1.patch

> appendPartition in HiveMetaStoreClient does not conform to the
> IMetaStoreClient.
>
>          Key: HIVE-17678
>          URL: https://issues.apache.org/jira/browse/HIVE-17678
>      Project: Hive
>   Issue Type: Bug
>   Components: Metastore
>   Affects Versions: 1.1.0
>     Reporter: Nithish
>     Assignee: Nithish
>       Labels: Metastore
>      Fix For: 1.1.0, 2.0.0
>  Attachments: HIVE-17678.1.patch
>
> {code:java}
> Partition appendPartition(String dbName, String tableName, String partName)
> {code}
> in HiveMetaStoreClient does not conform with the declaration
> {code:java}
> Partition appendPartition(String tableName, String dbName, String name)
> {code}
> in IMetaStoreClient
> *Positions for dbName and tableName are interchanged.*
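Because all three parameters are Strings, the compiler cannot catch the interchanged positions: the override compiles cleanly and silently routes the caller's table name into the database-name slot. A toy reproduction of that hazard (the interface and names below are illustrative, not the Hive classes):

```java
// Toy reproduction of the hazard in HIVE-17678: an implementation whose
// parameter order silently disagrees with its interface. Java matches
// overrides by type signature only, so (String, String) == (String, String)
// and the mix-up compiles without any warning.
public class ParamOrderDemo {
    interface Client {
        // Interface contract: table name first, then database name.
        String locate(String tableName, String dbName);
    }

    // Buggy implementation written as if the contract were (dbName, tableName).
    static class BuggyClient implements Client {
        @Override
        public String locate(String dbName, String tableName) {
            return dbName + "." + tableName;
        }
    }

    public static String call() {
        Client c = new BuggyClient();
        // Caller follows the interface: tableName first, then dbName...
        return c.locate("my_table", "my_db");
        // ...but the implementation treats "my_table" as the database name.
    }
}
```

The caller expects `my_db.my_table` and gets the arguments swapped instead, which is exactly why aligning the implementation with the `IMetaStoreClient` declaration matters.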
[jira] [Updated] (HIVE-17974) If the job resource jar already exists in the HDFS fileSystem, do not upload!
[ https://issues.apache.org/jira/browse/HIVE-17974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

wan kun updated HIVE-17974:
---------------------------
    Attachment: HIVE-17974.2-branch-1.2.patch

> If the job resource jar already exists in the HDFS fileSystem, do not upload!
>
>          Key: HIVE-17974
>          URL: https://issues.apache.org/jira/browse/HIVE-17974
>      Project: Hive
>   Issue Type: Improvement
>   Components: Hive, Query Processor, Tez
>   Affects Versions: 1.2.2
>     Reporter: wan kun
>     Assignee: wan kun
>     Priority: Minor
>      Fix For: 1.2.3
>  Attachments: HIVE-17974-branch-1.2.patch, HIVE-17974.2-branch-1.2.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> For MR or Tez applications, if the jar resources already exist on HDFS, the
> application will still upload the jars to HDFS when it starts. I think this
> is not needed.
> So, if the original resource file is already on HDFS, I will record it, and
> when the application starts, it will use the original file on HDFS.
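The idea generalizes to any check-before-copy: probe the destination filesystem for the resource before shipping it. A hedged sketch using the local filesystem as a stand-in for HDFS (in Hive the probe would go through the Hadoop `FileSystem` API; the helper and the size comparison below are illustrative, and a real implementation would likely compare checksums rather than trusting name and length alone):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative check-before-upload: skip copying a job resource when an
// identical-looking file is already present at the destination. Local paths
// stand in for HDFS here.
public class ResourceUploader {
    // Returns true if the file was uploaded, false if the upload was skipped.
    public static boolean uploadIfAbsent(Path localJar, Path destJar) {
        try {
            if (Files.exists(destJar) && Files.size(destJar) == Files.size(localJar)) {
                return false; // resource already present: skip the upload
            }
            Files.createDirectories(destJar.getParent());
            Files.copy(localJar, destJar, StandardCopyOption.REPLACE_EXISTING);
            return true;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Self-contained demo: the first call uploads, the second is skipped.
    public static boolean[] demo() {
        try {
            Path dir = Files.createTempDirectory("hive-res");
            Path local = Files.writeString(dir.resolve("job.jar"), "jar-bytes");
            Path dest = dir.resolve("remote").resolve("job.jar");
            return new boolean[] { uploadIfAbsent(local, dest), uploadIfAbsent(local, dest) };
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```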
[jira] [Commented] (HIVE-8534) sql std auth : update configuration whitelist for 0.14
[ https://issues.apache.org/jira/browse/HIVE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239938#comment-16239938 ]

Lefty Leverenz commented on HIVE-8534:
--------------------------------------

[~ajisakaa] finished the documentation and updated the descriptions (HIVE-8937), so I removed the TODOC14 label.

* [hive.security.authorization.sqlstd.confwhitelist | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.security.authorization.sqlstd.confwhitelist]
* [hive.security.authorization.sqlstd.confwhitelist.append | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.security.authorization.sqlstd.confwhitelist.append]

> sql std auth : update configuration whitelist for 0.14
>
>          Key: HIVE-8534
>          URL: https://issues.apache.org/jira/browse/HIVE-8534
>      Project: Hive
>   Issue Type: Bug
>   Components: Authorization, SQLStandardAuthorization
>     Reporter: Thejas M Nair
>     Assignee: Thejas M Nair
>     Priority: Blocker
>      Fix For: 0.14.0
>  Attachments: HIVE-8534.1.patch, HIVE-8534.2.patch, HIVE-8534.3.patch, HIVE-8534.4.patch, HIVE-8534.5.patch
>
> New config parameters have been introduced in hive 0.14. SQL standard
> authorization needs to be updated to allow some new parameters to be set,
> when the authorization mode is enabled.
[jira] [Commented] (HIVE-8937) fix description of hive.security.authorization.sqlstd.confwhitelist.* params
[ https://issues.apache.org/jira/browse/HIVE-8937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239937#comment-16239937 ]

Lefty Leverenz commented on HIVE-8937:
--------------------------------------

Thanks for the documentation, Akira. I removed the TODOC labels.

> fix description of hive.security.authorization.sqlstd.confwhitelist.* params
>
>          Key: HIVE-8937
>          URL: https://issues.apache.org/jira/browse/HIVE-8937
>      Project: Hive
>   Issue Type: Bug
>   Components: Documentation
>   Affects Versions: 0.14.0
>     Reporter: Thejas M Nair
>     Assignee: Akira Ajisaka
>      Fix For: 3.0.0
>  Attachments: HIVE-8937.001.patch, HIVE-8937.002.patch
>
> hive.security.authorization.sqlstd.confwhitelist.* param description in
> HiveConf is incorrect. The expected value is a regex, not comma separated
> regexes.
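Concretely, the whitelist value is one regular expression, so multiple pattern families are combined with alternation (`|`) rather than commas. A small sketch of how such a single-regex whitelist behaves (the example patterns below are illustrative, not Hive's default whitelist):

```java
import java.util.regex.Pattern;

// The whitelist is a single regular expression: to allow several families of
// parameters you join them with '|' (alternation), not with commas. A
// parameter is settable only if it matches the whole regex.
public class ConfWhitelist {
    private final Pattern allowed;

    public ConfWhitelist(String regex) {
        this.allowed = Pattern.compile(regex);
    }

    public boolean isSettable(String param) {
        return allowed.matcher(param).matches();
    }
}
```

A comma-joined value such as `hive\.exec\..*,mapreduce\.job\..*` compiles as one regex that demands a literal comma in the parameter name, so it matches nothing useful; the alternation form `hive\.exec\..*|mapreduce\.job\..*` is the correct shape.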
[jira] [Updated] (HIVE-8534) sql std auth : update configuration whitelist for 0.14
[ https://issues.apache.org/jira/browse/HIVE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lefty Leverenz updated HIVE-8534:
---------------------------------
    Labels:   (was: TODOC14)
[jira] [Updated] (HIVE-8937) fix description of hive.security.authorization.sqlstd.confwhitelist.* params
[ https://issues.apache.org/jira/browse/HIVE-8937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lefty Leverenz updated HIVE-8937:
---------------------------------
    Labels:   (was: TODOC14 TODOC3.0)
[jira] [Commented] (HIVE-17528) Add more q-tests for Hive-on-Spark with Parquet vectorized reader
[ https://issues.apache.org/jira/browse/HIVE-17528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239935#comment-16239935 ]

Ferdinand Xu commented on HIVE-17528:
-------------------------------------

Hi [~vihangk1], can you help me review it? Thank you!

> Add more q-tests for Hive-on-Spark with Parquet vectorized reader
>
>          Key: HIVE-17528
>          URL: https://issues.apache.org/jira/browse/HIVE-17528
>      Project: Hive
>   Issue Type: Sub-task
>     Reporter: Vihang Karajgaonkar
>     Assignee: Ferdinand Xu
>  Attachments: HIVE-17528.patch
>
> Most of the vectorization related q-tests operate on ORC tables using Tez. It
> would be good to add more coverage on a different combination of engine and
> file-format. We can model existing q-tests using parquet tables and run it
> using TestSparkCliDriver
[jira] [Commented] (HIVE-17973) Fix small bug in multi_insert_union_src.q
[ https://issues.apache.org/jira/browse/HIVE-17973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239927#comment-16239927 ]

Hive QA commented on HIVE-17973:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12896143/HIVE-17973.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 11357 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=62)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=156)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=102)
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[ct_noperm_loc] (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] (batchId=111)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=206)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testAmPoolInteractions (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanQpChanges (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testApplyPlanUserMapping (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testAsyncSessionInitFailures (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testClusterFractions (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testDestroyAndReturn (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testQueueing (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReopen (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReuse (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReuseWithDifferentPool (batchId=281)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testReuseWithQueueing (batchId=281)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=223)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7656/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7656/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7656/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12896143 - PreCommit-HIVE-Build

> Fix small bug in multi_insert_union_src.q
>
>          Key: HIVE-17973
>          URL: https://issues.apache.org/jira/browse/HIVE-17973
>      Project: Hive
>   Issue Type: Bug
>     Reporter: liyunzhang
>     Assignee: liyunzhang
>     Priority: Trivial
>  Attachments: HIVE-17973.2.patch, HIVE-17973.patch
>
> In ql\src\test\queries\clientpositive\multi_insert_union_src.q, there are
> two problems in the query file:
> # It is strange to drop src_multi1 twice.
> # {{src1}} is used but never created (maybe src1 is created in another q-file).
> {code}
> set hive.mapred.mode=nonstrict;
> drop table if exists src2;
> drop table if exists src_multi1;
> drop table if exists src_multi1;
> set hive.stats.dbclass=fs;
> CREATE TABLE src2 as SELECT * FROM src;
> create table src_multi1 like src;
> create table src_multi2 like src;
> explain
> from (select * from src1 where key < 10 union all select * from src2 where key > 100) s
> insert overwrite table src_multi1 select key, value where key < 150 order by key
> insert overwrite table src_multi2 select key, value where key > 400 order by value;
> from (select * from src1 where key < 10 union all select * from src2 where key > 100) s
> insert overwrite table src_multi1 select key, value where key < 150 order by key
> insert overwrite table src_multi2 select key, value where key > 400 order by value;
> select * from src_multi1;
> select * from src_multi2;
> {code}
[jira] [Updated] (HIVE-17595) Correct DAG for updating the last.repl.id for a database during bootstrap load
[ https://issues.apache.org/jira/browse/HIVE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-17595:
---------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review, [~daijy]!

> Correct DAG for updating the last.repl.id for a database during bootstrap load
>
>          Key: HIVE-17595
>          URL: https://issues.apache.org/jira/browse/HIVE-17595
>      Project: Hive
>   Issue Type: Bug
>   Components: HiveServer2
>   Affects Versions: 3.0.0
>     Reporter: anishek
>     Assignee: anishek
>      Fix For: 3.0.0
>  Attachments: HIVE-17595.0.patch, HIVE-17595.1.patch, HIVE-17595.2.patch, HIVE-17595.3.patch, HIVE-17595.4.patch, HIVE-17595.5.patch
>
> We update the last.repl.id as a database property. This is done after all the
> bootstrap tasks to load the relevant data are done and is the last task to be
> run. However, we are currently not setting up the DAG correctly for this
> task: it is getting added as the root task, whereas it should be the last
> task to be run in the DAG. This becomes more important after the inclusion of
> HIVE-17426, since that will lead to parallel execution, and incorrect DAGs
> will lead to incorrect results/state of the system.
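The structural point: the property-update task must be a leaf that every bootstrap load task precedes, not a root. A toy DAG sketch of that ordering, using a minimal topological sort (the task names and the `ReplDag` class are illustrative, not Hive's Task API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the DAG fix: updating last.repl.id must run as the final leaf,
// after every bootstrap load task, instead of being added as a root. With
// parallel execution, a mis-placed root could update the id before the data
// it describes has actually been loaded.
public class ReplDag {
    private final Map<String, List<String>> edges = new LinkedHashMap<>(); // parent -> children
    private final Map<String, Integer> indegree = new LinkedHashMap<>();

    public void addDependency(String before, String after) {
        edges.computeIfAbsent(before, k -> new ArrayList<>());
        edges.computeIfAbsent(after, k -> new ArrayList<>());
        indegree.putIfAbsent(before, 0);
        indegree.putIfAbsent(after, 0);
        edges.get(before).add(after);
        indegree.merge(after, 1, Integer::sum);
    }

    // Kahn's algorithm: a task runs only after every task it depends on.
    public List<String> executionOrder() {
        Map<String, Integer> deg = new LinkedHashMap<>(indegree);
        Deque<String> ready = new ArrayDeque<>();
        deg.forEach((t, d) -> { if (d == 0) ready.add(t); });
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String t = ready.poll();
            order.add(t);
            for (String c : edges.get(t)) {
                if (deg.merge(c, -1, Integer::sum) == 0) ready.add(c);
            }
        }
        return order;
    }

    // Correct wiring: every bootstrap load task precedes the id update, so the
    // update is the last task in any valid execution order.
    public static List<String> bootstrapOrder() {
        ReplDag dag = new ReplDag();
        dag.addDependency("load-table-1", "update-last.repl.id");
        dag.addDependency("load-table-2", "update-last.repl.id");
        return dag.executionOrder();
    }
}
```

Wired this way, no schedule (serial or parallel) can run `update-last.repl.id` before both load tasks finish, which is the invariant the patch restores.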
[jira] [Updated] (HIVE-17595) Correct DAG for updating the last.repl.id for a database during bootstrap load
[ https://issues.apache.org/jira/browse/HIVE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-17595:
---------------------------
    Attachment: HIVE-17595.5.patch
[jira] [Updated] (HIVE-17595) Correct DAG for updating the last.repl.id for a database during bootstrap load
[ https://issues.apache.org/jira/browse/HIVE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-17595:
---------------------------
    Attachment:   (was: HIVE-17595.5.patch)
[jira] [Updated] (HIVE-17595) Correct DAG for updating the last.repl.id for a database during bootstrap load
[ https://issues.apache.org/jira/browse/HIVE-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

anishek updated HIVE-17595:
---------------------------
    Attachment: HIVE-17595.5.patch

adding license headers
[jira] [Commented] (HIVE-17948) Hive 2.3.2 Release Planning
[ https://issues.apache.org/jira/browse/HIVE-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239881#comment-16239881 ]

Hive QA commented on HIVE-17948:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12895932/HIVE-17948.3-branch-2.3.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10569 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[comments] (batchId=35)
org.apache.hadoop.hive.ql.TestTxnCommands2.testNonAcidToAcidConversion02 (batchId=263)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdate.testNonAcidToAcidConversion02 (batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion02 (batchId=272)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7655/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7655/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7655/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12895932 - PreCommit-HIVE-Build

> Hive 2.3.2 Release Planning
>
>          Key: HIVE-17948
>          URL: https://issues.apache.org/jira/browse/HIVE-17948
>      Project: Hive
>   Issue Type: Bug
>   Affects Versions: 2.3.2
>     Reporter: Sahil Takiar
>     Assignee: Sahil Takiar
>      Fix For: 2.3.2
>  Attachments: HIVE-17948-branch-2.3.patch, HIVE-17948.2-branch-2.3.patch, HIVE-17948.3-branch-2.3.patch
>
> Release planning for Hive 2.3.2
[jira] [Updated] (HIVE-17984) getMaxLength is not returning the previously set length in ORC file
[ https://issues.apache.org/jira/browse/HIVE-17984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Syam updated HIVE-17984: Description: getMaxLength is not returning the correct length for char/varchar datatypes. I see that getMaxLength is returning 255 for CHAR type and 65535 for VARCHAR type. When I checked the same file using orcfiledump utility, I could see the correct lengths. Here is the snippet of the code: Reader _reader = OrcFile.createReader(new Path(_fileName),OrcFile.readerOptions(conf).filesystem(fs)) ; TypeDescription metarec = _reader.getSchema() ; List<TypeDescription> cols = metarec.getChildren(); List<String> colNames = metarec.getFieldNames(); for (int i=0; i < cols.size(); i++) { TypeDescription fieldSchema = cols.get(i); switch (fieldSchema.getCategory()) { case CHAR: header += "char(" + fieldSchema.getMaxLength() + ")" ; break; -- -- } } Please let me know your pointers. was: getMaxLength is not returning the correct length for char/varchar datatypes. I see that getMaxLength is returning 255 for CHAR type and 65535 for VARCHAR type. When I checked the same file using orcfiledump utility, I could see the correct lengths. Here is the snippet the code: Reader _reader = OrcFile.createReader(new Path(_fileName),OrcFile.readerOptions(conf).filesystem(fs)) ; TypeDescription metarec = _reader.getSchema() ; List<TypeDescription> cols = metarec.getChildren(); List<String> colNames = metarec.getFieldNames(); for (int i=0; i < cols.size(); i++) { TypeDescription fieldSchema = cols.get(i); switch (fieldSchema.getCategory()) { case CHAR: header += "char(" + fieldSchema.getMaxLength() + ")" ; break; -- -- } } Please let me know your pointers please. 
> getMaxLength is not returning the previously set length in ORC file > --- > > Key: HIVE-17984 > URL: https://issues.apache.org/jira/browse/HIVE-17984 > Project: Hive > Issue Type: Bug > Components: Hive, ORC > Environment: tested it against hive-exec 2.1 >Reporter: Syam > Original Estimate: 24h > Remaining Estimate: 24h > > getMaxLength is not returning the correct length for char/varchar datatypes. > I see that getMaxLength is returning 255 for CHAR type and 65535 for VARCHAR > type. > When I checked the same file using orcfiledump utility, I could see the > correct lengths. > Here is the snippet of the code: > Reader _reader = OrcFile.createReader(new > Path(_fileName),OrcFile.readerOptions(conf).filesystem(fs)) ; > TypeDescription metarec = _reader.getSchema() ; > List<TypeDescription> cols = metarec.getChildren(); > List<String> colNames = metarec.getFieldNames(); > for (int i=0; i < cols.size(); i++) > { > TypeDescription fieldSchema = cols.get(i); > switch (fieldSchema.getCategory()) > { >case CHAR: > header += "char(" + fieldSchema.getMaxLength() + ")" ; > break; >-- > -- > } > } > Please let me know your pointers. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
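The reported snippet can be reduced to a self-contained illustration of the API in question. This is a hedged sketch, not the reporter's code: it builds a TypeDescription from a schema string instead of opening a real ORC file (which would need a Configuration, a FileSystem, and an actual file on disk), and it assumes the orc-core dependency is on the classpath. Notably, 255 and 65535 are the Hive maximums for CHAR and VARCHAR, which suggests the file's schema may have been written with default lengths rather than getMaxLength() misbehaving on an explicit one.

```java
import java.util.List;
import org.apache.orc.TypeDescription;

public class OrcMaxLengthDemo {
    public static void main(String[] args) {
        // Stand-in for _reader.getSchema(): a struct schema carrying explicit lengths.
        TypeDescription schema =
            TypeDescription.fromString("struct<c1:char(10),c2:varchar(30)>");
        List<TypeDescription> cols = schema.getChildren();
        List<String> colNames = schema.getFieldNames();
        StringBuilder header = new StringBuilder();
        for (int i = 0; i < cols.size(); i++) {
            TypeDescription field = cols.get(i);
            switch (field.getCategory()) {
                case CHAR:
                    header.append(colNames.get(i))
                          .append(" char(").append(field.getMaxLength()).append(") ");
                    break;
                case VARCHAR:
                    header.append(colNames.get(i))
                          .append(" varchar(").append(field.getMaxLength()).append(") ");
                    break;
                default:
                    break;
            }
        }
        System.out.println(header.toString().trim());
    }
}
```

When the schema string carries explicit lengths, this prints `c1 char(10) c2 varchar(30)`; comparing that with what `orcfiledump` shows for the file in question would narrow down whether the length was lost at write time or at read time.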
[jira] [Updated] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wan kun updated HIVE-17985: --- Attachment: HIVE-17985-branch-2.3.patch > When check the partitions size in the partitioned table, it will throw > NullPointerException > > > Key: HIVE-17985 > URL: https://issues.apache.org/jira/browse/HIVE-17985 > Project: Hive > Issue Type: Bug > Components: Parser, Physical Optimizer >Affects Versions: 1.2.2, 2.3.0, 3.0.0 >Reporter: wan kun >Assignee: wan kun > Fix For: 3.0.0 > > Attachments: HIVE-17985-branch-1.2.patch, > HIVE-17985-branch-2.3.patch, HIVE-17985.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > When the hive.limit.query.max.table.partition parameter is set, the > SemanticAnalyzer will throw NullPointerException. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wan kun updated HIVE-17985: --- Attachment: HIVE-17985-branch-1.2.patch > When check the partitions size in the partitioned table, it will throw > NullPointerException > > > Key: HIVE-17985 > URL: https://issues.apache.org/jira/browse/HIVE-17985 > Project: Hive > Issue Type: Bug > Components: Parser, Physical Optimizer >Affects Versions: 1.2.2, 2.3.0, 3.0.0 >Reporter: wan kun >Assignee: wan kun > Fix For: 3.0.0 > > Attachments: HIVE-17985-branch-1.2.patch, HIVE-17985.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > When the hive.limit.query.max.table.partition parameter is set, the > SemanticAnalyzer will throw NullPointerException. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wan kun updated HIVE-17985: --- Attachment: HIVE-17985.patch > When check the partitions size in the partitioned table, it will throw > NullPointerException > > > Key: HIVE-17985 > URL: https://issues.apache.org/jira/browse/HIVE-17985 > Project: Hive > Issue Type: Bug > Components: Parser, Physical Optimizer >Affects Versions: 1.2.2, 2.3.0, 3.0.0 >Reporter: wan kun >Assignee: wan kun > Fix For: 3.0.0 > > Attachments: HIVE-17985.patch > > Original Estimate: 24h > Remaining Estimate: 24h > > When the hive.limit.query.max.table.partition parameter is set, the > SemanticAnalyzer will throw NullPointerException. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wan kun updated HIVE-17985: --- Status: Patch Available (was: Open) > When check the partitions size in the partitioned table, it will throw > NullPointerException > > > Key: HIVE-17985 > URL: https://issues.apache.org/jira/browse/HIVE-17985 > Project: Hive > Issue Type: Bug > Components: Parser, Physical Optimizer >Affects Versions: 2.3.0, 1.2.2, 3.0.0 >Reporter: wan kun >Assignee: wan kun > Fix For: 3.0.0 > > Original Estimate: 24h > Remaining Estimate: 24h > > When the hive.limit.query.max.table.partition parameter is set, the > SemanticAnalyzer will throw NullPointerException. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17985) When check the partitions size in the partitioned table, it will throw NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-17985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wan kun reassigned HIVE-17985: -- > When check the partitions size in the partitioned table, it will throw > NullPointerException > > > Key: HIVE-17985 > URL: https://issues.apache.org/jira/browse/HIVE-17985 > Project: Hive > Issue Type: Bug > Components: Parser, Physical Optimizer >Affects Versions: 2.3.0, 1.2.2, 3.0.0 >Reporter: wan kun >Assignee: wan kun > Fix For: 3.0.0 > > Original Estimate: 24h > Remaining Estimate: 24h > > When the hive.limit.query.max.table.partition parameter is set, the > SemanticAnalyzer will throw NullPointerException. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17973) Fix small bug in multi_insert_union_src.q
[ https://issues.apache.org/jira/browse/HIVE-17973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liyunzhang updated HIVE-17973: -- Attachment: HIVE-17973.2.patch Updated the patch; we also need to modify $HIVE_SOURCE/ql/src/test/results/clientpositive/multi_insert_union_src.q.out > Fix small bug in multi_insert_union_src.q > - > > Key: HIVE-17973 > URL: https://issues.apache.org/jira/browse/HIVE-17973 > Project: Hive > Issue Type: Bug >Reporter: liyunzhang >Assignee: liyunzhang >Priority: Trivial > Attachments: HIVE-17973.2.patch, HIVE-17973.patch > > > in ql\src\test\queries\clientpositive\multi_insert_union_src.q, > There are two problems in the query file: > 1. It is strange to drop src_multi1 twice > 2. {{src1}} is used but never created (maybe src1 is created in another qfile) > {code} > set hive.mapred.mode=nonstrict; > drop table if exists src2; > drop table if exists src_multi1; > drop table if exists src_multi1; > set hive.stats.dbclass=fs; > CREATE TABLE src2 as SELECT * FROM src; > create table src_multi1 like src; > create table src_multi2 like src; > explain > from (select * from src1 where key < 10 union all select * from src2 where > key > 100) s > insert overwrite table src_multi1 select key, value where key < 150 order by > key > insert overwrite table src_multi2 select key, value where key > 400 order by > value; > from (select * from src1 where key < 10 union all select * from src2 where > key > 100) s > insert overwrite table src_multi1 select key, value where key < 150 order by > key > insert overwrite table src_multi2 select key, value where key > 400 order by > value; > select * from src_multi1; > select * from src_multi2; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
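One plausible reading of problem 1 (an assumption, not the committed patch — the actual fix is whatever HIVE-17973.2.patch plus the regenerated .q.out contain) is that the second `drop table if exists src_multi1;` was meant for `src_multi2`, so the setup block would become:

```sql
-- Hypothetical corrected setup block for multi_insert_union_src.q;
-- assumes the duplicated drop of src_multi1 was intended for src_multi2.
set hive.mapred.mode=nonstrict;
drop table if exists src2;
drop table if exists src_multi1;
drop table if exists src_multi2;
set hive.stats.dbclass=fs;
CREATE TABLE src2 as SELECT * FROM src;
create table src_multi1 like src;
create table src_multi2 like src;
```

As for problem 2, `src1` is likely one of the standard tables initialized for all clientpositive tests (alongside `src` and `srcpart`), which would explain why the queries run even though this qfile never creates it.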
[jira] [Updated] (HIVE-17877) HoS: combine equivalent DPP sink works
[ https://issues.apache.org/jira/browse/HIVE-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HIVE-17877: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Sahil for the review. > HoS: combine equivalent DPP sink works > -- > > Key: HIVE-17877 > URL: https://issues.apache.org/jira/browse/HIVE-17877 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Fix For: 3.0.0 > > Attachments: HIVE-17877.1.patch, HIVE-17877.2.patch > > > Suppose part1 and part2 are partitioned tables. The simplest use case should > be something like: > {code} > explain > select * from > (select part1.key, part1.value from part1 join src on part1.p=src.key) a > union all > (select part2.key, part2.value from part2 join src on part2.p=src.key); > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17948) Hive 2.3.2 Release Planning
[ https://issues.apache.org/jira/browse/HIVE-17948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239807#comment-16239807 ] Sergio Peña commented on HIVE-17948: [~stakiar] I took a look at the test failures and found that: {noformat} TestHCatClient.testTransportFailure {noformat} HIVE-16312 fixes this test failure. I cherry-picked and pushed to branch-2.3. {noformat}TestCliDriver.testCliDriver[comments]{noformat} This is a flaky test. It fails on Jenkins because of an extra space in the comments.q.out, but I cannot reproduce it locally to create a patch. This is just flaky, we can ignore it. {noformat} TestTxnCommands2.testNonAcidToAcidConversion02 TestTxnCommands2WithSplitUpdate.testNonAcidToAcidConversion02 TestTxnCommands2WithSplitUpdateAndVectorization.testNonAcidToAcidConversion02 {noformat} These tests have failed since 2.3.1 because of the following commit: HIVE-17562: ACID 1.0 + ETL strategy should treat empty compacted files as uncovered deltas (Prasanth Jayachandran reviewed by Eugene Koifman) {noformat}TestSparkCliDriver.org.apache.hadoop.hive.cli.TestSparkCliDriver{noformat} This test is weird. It fails on the createSources part of the TestSparkCliDriver but only once and only in Jenkins. I ran some .q spark tests but it does not fail for me. This is a new test failure due to the commits you mentioned on this JIRA, but I'm not sure how to reproduce it locally. > Hive 2.3.2 Release Planning > --- > > Key: HIVE-17948 > URL: https://issues.apache.org/jira/browse/HIVE-17948 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.2 >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Fix For: 2.3.2 > > Attachments: HIVE-17948-branch-2.3.patch, > HIVE-17948.2-branch-2.3.patch, HIVE-17948.3-branch-2.3.patch > > > Release planning for Hive 2.3.2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-16312) Flaky test: TestHCatClient.testTransportFailure
[ https://issues.apache.org/jira/browse/HIVE-16312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-16312: --- Fix Version/s: 2.3.2 > Flaky test: TestHCatClient.testTransportFailure > --- > > Key: HIVE-16312 > URL: https://issues.apache.org/jira/browse/HIVE-16312 > Project: Hive > Issue Type: Sub-task >Reporter: Barna Zsombor Klara >Assignee: Barna Zsombor Klara > Fix For: 3.0.0, 2.3.2 > > Attachments: HIVE-16312.01-branch-2.3.patch, HIVE-16312.01.patch > > > The test has been failing consistently for 10+ builds. > Error message: > {code} > Error Message > The expected exception was never thrown. > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17911) org.apache.hadoop.hive.metastore.ObjectStore - Tune Up
[ https://issues.apache.org/jira/browse/HIVE-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239662#comment-16239662 ] Hive QA commented on HIVE-17911: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12896133/HIVE-17911.3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7654/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7654/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7654/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-11-05 18:03:40.725 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-7654/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-11-05 18:03:40.728 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 6c87136..49f6814 master -> origin/master + git reset --hard HEAD HEAD is now at 6c87136 HIVE-17932 : Remove option to control partition level basic stats fetching (Zoltan Haindrich via Ashutosh Chauhan) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 49f6814 HIVE-17962 : org.apache.hadoop.hive.metastore.security.MemoryTokenStore - Parameterize Logging (Beluga Behr via Aihua Xu) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-11-05 18:03:46.226 + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch Going to apply patch with: patch -p1 patching file standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java Hunk #2 succeeded at 218 (offset -3 lines). Hunk #3 succeeded at 247 (offset -3 lines). Hunk #4 succeeded at 337 (offset -3 lines). Hunk #5 succeeded at 384 (offset -3 lines). Hunk #6 succeeded at 455 (offset -3 lines). Hunk #7 succeeded at 520 (offset -3 lines). Hunk #8 succeeded at 539 (offset -3 lines). Hunk #9 succeeded at 568 (offset -3 lines). Hunk #10 succeeded at 579 (offset -3 lines). Hunk #11 succeeded at 631 (offset -3 lines). Hunk #12 succeeded at 661 (offset -3 lines). Hunk #13 succeeded at 828 (offset -3 lines). Hunk #14 succeeded at 867 (offset -3 lines). Hunk #15 succeeded at 913 (offset -3 lines). Hunk #16 succeeded at 924 (offset -3 lines). 
Hunk #17 succeeded at 1072 (offset -3 lines). Hunk #18 succeeded at 1180 (offset -3 lines). Hunk #19 succeeded at 1379 (offset -3 lines). Hunk #20 succeeded at 1505 (offset -3 lines). Hunk #21 succeeded at 1847 (offset -3 lines). Hunk #22 succeeded at 1970 (offset -3 lines). Hunk #23 succeeded at 2028 (offset -3 lines). Hunk #24 succeeded at 2133 (offset -3 lines). Hunk #25 succeeded at 2185 (offset -3 lines). Hunk #26 succeeded at 2193 (offset -3 lines). Hunk #27 succeeded at 2258 (offset -3 lines). Hunk #28 succeeded at 2408 (offset -3 lines). Hunk #29 succeeded at 2432 (offset -3 lines). Hunk #30 succeeded at 2440 (offset -3 lines). Hunk #31 succeeded at 2469 (offset -3 lines). Hunk #32 succeeded at 2514 (offset -3 lines). Hunk #33 succeeded at 2536 (offset -3 lines). Hunk #34 succeeded at 2681 (offset -3 lines). Hunk #35 succeeded at 2763 (offset -3 lines). Hunk #36 succeeded at 2861 (offset -3 lines). Hunk #37 succeeded at 2935 (offset -3 lines). Hunk #38 succeeded at 2959 (offset -3 lines). Hunk #39 succeeded at 2998 (offset -3 lines). Hunk #40 succeeded at 3009 (offset -3 lines). Hunk #41 succeeded at 3259 (offset -3 lines). Hunk #42 succeeded at 3339 (offset -3 lines
[jira] [Commented] (HIVE-16855) org.apache.hadoop.hive.ql.exec.mr.HashTableLoader Improvements
[ https://issues.apache.org/jira/browse/HIVE-16855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239656#comment-16239656 ] BELUGA BEHR commented on HIVE-16855: [~aihuaxu] :) > org.apache.hadoop.hive.ql.exec.mr.HashTableLoader Improvements > -- > > Key: HIVE-16855 > URL: https://issues.apache.org/jira/browse/HIVE-16855 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.1.1, 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-16855.1.patch > > > # Improve (Simplify) Logging > # Remove custom buffer size for {{BufferedInputStream}} and instead rely on > JVM default which is often larger these days (8192) > # Simplify looping logic -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-16736) General Improvements to BufferedRows
[ https://issues.apache.org/jira/browse/HIVE-16736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239655#comment-16239655 ] BELUGA BEHR commented on HIVE-16736: [~ngangam] :) > General Improvements to BufferedRows > > > Key: HIVE-16736 > URL: https://issues.apache.org/jira/browse/HIVE-16736 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-16736.1.patch, HIVE-16736.1.patch > > > General improvements for {{BufferedRows.java}}. Use {{ArrayList}} instead of > {{LinkedList}} to conserve memory for large data sets, prevent having to loop > through the entire data set twice in {{normalizeWidths}} method, some > simplifications. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17911) org.apache.hadoop.hive.metastore.ObjectStore - Tune Up
[ https://issues.apache.org/jira/browse/HIVE-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-17911: --- Status: Open (was: Patch Available) > org.apache.hadoop.hive.metastore.ObjectStore - Tune Up > -- > > Key: HIVE-17911 > URL: https://issues.apache.org/jira/browse/HIVE-17911 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-17911.1.patch, HIVE-17911.2.patch, > HIVE-17911.3.patch > > > # Remove unused variables > # Add logging parameterization > # Use CollectionUtils.isEmpty/isNotEmpty to simplify and unify collection > empty check (and always use null check) > # Minor tweaks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17911) org.apache.hadoop.hive.metastore.ObjectStore - Tune Up
[ https://issues.apache.org/jira/browse/HIVE-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-17911: --- Attachment: HIVE-17911.3.patch Trying to get build to kickoff again... > org.apache.hadoop.hive.metastore.ObjectStore - Tune Up > -- > > Key: HIVE-17911 > URL: https://issues.apache.org/jira/browse/HIVE-17911 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-17911.1.patch, HIVE-17911.2.patch, > HIVE-17911.3.patch > > > # Remove unused variables > # Add logging parameterization > # Use CollectionUtils.isEmpty/isNotEmpty to simplify and unify collection > empty check (and always use null check) > # Minor tweaks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17911) org.apache.hadoop.hive.metastore.ObjectStore - Tune Up
[ https://issues.apache.org/jira/browse/HIVE-17911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-17911: --- Status: Patch Available (was: Open) > org.apache.hadoop.hive.metastore.ObjectStore - Tune Up > -- > > Key: HIVE-17911 > URL: https://issues.apache.org/jira/browse/HIVE-17911 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-17911.1.patch, HIVE-17911.2.patch, > HIVE-17911.3.patch > > > # Remove unused variables > # Add logging parameterization > # Use CollectionUtils.isEmpty/isNotEmpty to simplify and unify collection > empty check (and always use null check) > # Minor tweaks -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17966) org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveArrayInspector - Review
[ https://issues.apache.org/jira/browse/HIVE-17966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239653#comment-16239653 ] BELUGA BEHR commented on HIVE-17966: Failures are unrelated, I see the same things failing in another, unrelated, patch > org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveArrayInspector - Review > - > > Key: HIVE-17966 > URL: https://issues.apache.org/jira/browse/HIVE-17966 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Serializers/Deserializers >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 3.0.0, 2.3.2 > > Attachments: HIVE-17966.1.patch, HIVE-17966.2.patch > > > * Simplify > * Make faster - perform bulk operations instead of iterating > * Remove compilation warnings -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17962) org.apache.hadoop.hive.metastore.security.MemoryTokenStore - Parameterize Logging
[ https://issues.apache.org/jira/browse/HIVE-17962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-17962: Resolution: Fixed Assignee: BELUGA BEHR Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Beluga! > org.apache.hadoop.hive.metastore.security.MemoryTokenStore - Parameterize > Logging > - > > Key: HIVE-17962 > URL: https://issues.apache.org/jira/browse/HIVE-17962 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 3.0.0 > > Attachments: HIVE-17962.1.patch > > > * Parameterize logging > * Small simplification -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17963) Fix for HIVE-17113 can be improved for non-blobstore filesystems
[ https://issues.apache.org/jira/browse/HIVE-17963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239649#comment-16239649 ] Jason Dere commented on HIVE-17963: --- [~ashutoshc] [~owen.omalley] can you review? > Fix for HIVE-17113 can be improved for non-blobstore filesystems > > > Key: HIVE-17963 > URL: https://issues.apache.org/jira/browse/HIVE-17963 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-17963.1.patch, HIVE-17963.2.patch > > > HIVE-17113/HIVE-17813 fix the duplicate file issue by performing file moves > on a file-by-file basis. For non-blobstore filesystems this results in many > more filesystem/namenode operations compared to the previous > Utilities.mvFileToFinalPath() behavior (dedup files in src dir, rename src > dir to final dir). > For non-blobstore filesystems, a better solution would be the one described > [here|https://issues.apache.org/jira/browse/HIVE-17113?focusedCommentId=16100564&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16100564]: > 1) Move the temp directory to a new directory name, to prevent additional > files from being added by any runaway processes. > 2) Run removeTempOrDuplicateFiles() on this renamed temp directory > 3) Run renameOrMoveFiles() to move the renamed temp directory to the final > location. > This results in only one additional file operation in non-blobstore FSes > compared to the original Utilities.mvFileToFinalPath() behavior. > The proposal is to do away with the config setting > hive.exec.move.files.from.source.dir and always have behavior that should > take care of the duplicate file issue described in HIVE-17113. For > non-blobstore filesystems we will do steps 1-3 described above. 
For blobstore > filesystems we will do the solution done in HIVE-17113/HIVE-17813 which does > the file-by-file copy - this should have the same number of file operations > as doing a rename directory on blobstore, which effectively results in file > moves on a file-by-file basis. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
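The three proposed steps can be sketched against a plain filesystem. This is an illustrative stdlib (java.nio.file) model, not the Hive implementation: the real code operates on org.apache.hadoop.fs.FileSystem paths and uses Utilities.removeTempOrDuplicateFiles() for step 2, and the dedup rule here (keep one file per task-id prefix, where the task id is taken as everything before the last underscore) is a hypothetical stand-in.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.HashSet;
import java.util.Set;

public class MoveToFinalSketch {
    public static void moveToFinal(Path tmpDir, Path finalDir) throws IOException {
        // 1) Rename the temp directory so runaway tasks can no longer add files to it.
        Path frozen = tmpDir.resolveSibling(tmpDir.getFileName() + ".frozen");
        Files.move(tmpDir, frozen, StandardCopyOption.ATOMIC_MOVE);
        // 2) Remove duplicate attempt outputs (hypothetical rule: keep the first
        //    file seen for each task-id prefix, delete later ones).
        Set<String> seen = new HashSet<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(frozen)) {
            for (Path f : files) {
                String name = f.getFileName().toString();
                String taskId = name.substring(0, Math.max(name.lastIndexOf('_'), 0));
                if (!seen.add(taskId)) {
                    Files.delete(f); // duplicate output for an already-seen task
                }
            }
        }
        // 3) One final rename moves everything into place.
        Files.move(frozen, finalDir);
    }
}
```

The point of the structure is the operation count: regardless of how many files the query produced, only two directory renames touch the namenode beyond the dedup scan, versus one move per file in the blobstore-oriented path.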
[jira] [Commented] (HIVE-17947) Concurrent inserts might fail for ACID table since HIVE-17526 on branch-1
[ https://issues.apache.org/jira/browse/HIVE-17947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16239636#comment-16239636 ] Daniel Voros commented on HIVE-17947: - All failed tests either pass locally or fail in the same way without the patch. > Concurrent inserts might fail for ACID table since HIVE-17526 on branch-1 > - > > Key: HIVE-17947 > URL: https://issues.apache.org/jira/browse/HIVE-17947 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.3.0 >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Blocker > Attachments: HIVE-17947.1-branch-1.patch, > HIVE-17947.2-branch-1.patch, HIVE-17947.3-branch-1.patch > > > HIVE-17526 (only on branch-1) disabled conversion to ACID if there are > *_copy_N files under the table, but the filesystem checks introduced there > are running for every insert since the MoveTask in the end of the insert will > call alterTable eventually. > The filename checking also recurses into staging directories created by other > inserts. If those are removed while listing the files, it leads to the > following exception and failing insert: > {code} > java.io.FileNotFoundException: File > hdfs://mycluster/apps/hive/warehouse/dvoros.db/concurrent_insert/.hive-staging_hive_2017-10-30_13-23-35_056_2844419018556002410-2/-ext-10001 > does not exist. > at > org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1081) > ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?] > at > org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1059) > ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?] > at > org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1004) > ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?] > at > org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1000) > ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?] 
> at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > ~[hadoop-common-2.7.3.2.6.3.0-235.jar:?] > at > org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:1018) > ~[hadoop-hdfs-2.7.3.2.6.3.0-235.jar:?] > at > org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1735) > ~[hadoop-common-2.7.3.2.6.3.0-235.jar:?] > at > org.apache.hadoop.fs.FileSystem$6.handleFileStat(FileSystem.java:1864) > ~[hadoop-common-2.7.3.2.6.3.0-235.jar:?] > at org.apache.hadoop.fs.FileSystem$6.hasNext(FileSystem.java:1841) > ~[hadoop-common-2.7.3.2.6.3.0-235.jar:?] > at > org.apache.hadoop.hive.metastore.TransactionalValidationListener.containsCopyNFiles(TransactionalValidationListener.java:226) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.TransactionalValidationListener.handleAlterTableTransactionalProp(TransactionalValidationListener.java:104) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.TransactionalValidationListener.handle(TransactionalValidationListener.java:63) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.TransactionalValidationListener.onEvent(TransactionalValidationListener.java:55) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2478) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:4145) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_environment_context(HiveMetaStore.java:4117) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source) > ~[?:?] 
> at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_144] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_144] > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > com.sun.proxy.$Proxy32.alter_table_with_environment_context(Unknown Source) > [?:?] > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table_with_environme
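The failure mode in the trace above — a staging directory removed by a concurrent insert while the validation listing is in progress — is the kind of race a fix has to tolerate. A minimal stdlib sketch (java.nio.file rather than the Hadoop FileSystem API shown in the trace) of a listing that treats a vanished directory as simply empty:

```java
import java.io.IOException;
import java.nio.file.DirectoryIteratorException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SafeListing {
    // List file names under root; a directory that disappears mid-scan
    // contributes no entries instead of failing the caller.
    public static List<String> listTolerantly(Path root) throws IOException {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(root)) {
            for (Path entry : entries) {
                names.add(entry.getFileName().toString());
            }
        } catch (NoSuchFileException | DirectoryIteratorException vanished) {
            // Another writer removed the directory between discovery and listing;
            // for a _copy_N validation scan that is equivalent to "nothing to check".
        }
        return names;
    }
}
```

In the Hadoop API the equivalent would be catching FileNotFoundException around the RemoteIterator returned by listLocatedStatus; the larger fix in this JIRA is, of course, to avoid recursing into other inserts' staging directories at all.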