[jira] [Updated] (HIVE-18449) Add configurable policy for choosing the HMS URI from hive.metastore.uris
[ https://issues.apache.org/jira/browse/HIVE-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Janaki Lahorani updated HIVE-18449:
-----------------------------------
    Attachment: HIVE-18449.1.patch

> Add configurable policy for choosing the HMS URI from hive.metastore.uris
> -------------------------------------------------------------------------
>
>                 Key: HIVE-18449
>                 URL: https://issues.apache.org/jira/browse/HIVE-18449
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>            Reporter: Sahil Takiar
>            Assignee: Janaki Lahorani
>            Priority: Major
>         Attachments: HIVE-18449.1.patch
>
> HIVE-10815 added logic to randomly choose an HMS URI from
> {{hive.metastore.uris}}. It would be nice if there were a configurable policy
> that determined how a URI is chosen from this list - e.g. one option could be
> to randomly pick a URI, another could be to choose the first URI in the list
> (which was the behavior prior to HIVE-10815).

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
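For illustration, the two policies mentioned in the description could be expressed as a small selector. This is only a sketch; the class, enum, and method names below are hypothetical and are not taken from the attached patch.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical selector illustrating the two policies discussed in the issue:
// RANDOM (the behavior since HIVE-10815) and SEQUENTIAL (the pre-HIVE-10815
// behavior of always taking the first URI in hive.metastore.uris).
class MetastoreUriSelector {
    enum Policy { RANDOM, SEQUENTIAL }

    static String choose(List<String> uris, Policy policy) {
        if (uris == null || uris.isEmpty()) {
            throw new IllegalArgumentException("hive.metastore.uris is empty");
        }
        switch (policy) {
            case SEQUENTIAL:
                // Deterministic: always the first configured URI.
                return uris.get(0);
            case RANDOM:
            default:
                // Spread clients across metastores by picking uniformly at random.
                return uris.get(ThreadLocalRandom.current().nextInt(uris.size()));
        }
    }
}
```

A real implementation would presumably read the policy name from a new configuration property and fall back to the random behavior by default.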
[jira] [Comment Edited] (HIVE-16886) HMS log notifications may have duplicated event IDs if multiple HMS are running concurrently
[ https://issues.apache.org/jira/browse/HIVE-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343970#comment-16343970 ]

Alexander Kolbasov edited comment on HIVE-16886 at 1/29/18 8:36 PM:
--------------------------------------------------------------------
[~anishek] [~LinaAtAustin] The documentation isn't telling the whole story.
The actual MSSQLServerAdapter does implement it. Looking at getSelectStatement() we see:

{code:java}
if (lock && dba.supportsOption(DatastoreAdapter.LOCK_OPTION_PLACED_AFTER_FROM)) {
    sql.append(" WITH ").append(dba.getSelectWithLockOption());
}
{code}

Now, looking at MSSQLServerAdapter, it has:

{code:java}
public String getSelectWithLockOption() {
    return "(UPDLOCK, ROWLOCK)";
}
{code}

So it is implemented for MS SQL Server as well by DataNucleus, and the MS SQL adapter does define {code:java}LOCK_OPTION_PLACED_AFTER_FROM{code}

was (Author: akolb):
[~anishek] [~LinaAtAustin] The documentation isn't telling the whole story.
The actual MSSQLServerAdapter does implement it. Looking at getSelectStatement() we see:

{code:java}
if (lock && dba.supportsOption(DatastoreAdapter.LOCK_OPTION_PLACED_AFTER_FROM)) {
    sql.append(" WITH ").append(dba.getSelectWithLockOption());
}
{code}

Now, looking at MSSQLServerAdapter, it has:

{code:java}
public String getSelectWithLockOption() {
    return "(UPDLOCK, ROWLOCK)";
}
{code}

So it is implemented for MS SQL Server as well by DataNucleus.
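To make the two code paths concrete, here is a reduced sketch of how a locking SELECT gets composed. The `Adapter` interface below is a stand-in for DataNucleus's DatastoreAdapter (reduced to the behavior quoted above), and the SQL text is simplified; this is an illustration, not DataNucleus code.

```java
// Simplified illustration of the DataNucleus snippet quoted above: on an
// adapter that places its lock option after FROM (MS SQL Server), a locking
// read becomes "... WITH (UPDLOCK, ROWLOCK)"; on the generic RDBMS path it
// becomes "... FOR UPDATE".
class LockingSelectSketch {
    interface Adapter {
        // Stands in for dba.supportsOption(DatastoreAdapter.LOCK_OPTION_PLACED_AFTER_FROM).
        boolean lockAfterFrom();
        // Stands in for dba.getSelectWithLockOption(), e.g. "(UPDLOCK, ROWLOCK)".
        String selectWithLockOption();
    }

    static String selectFor(String table, boolean lock, Adapter dba) {
        StringBuilder sql = new StringBuilder("SELECT * FROM ").append(table);
        if (lock && dba.lockAfterFrom()) {
            sql.append(" WITH ").append(dba.selectWithLockOption());
        } else if (lock) {
            sql.append(" FOR UPDATE"); // generic RDBMS path
        }
        return sql.toString();
    }
}
```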
> HMS log notifications may have duplicated event IDs if multiple HMS are
> running concurrently
> -----------------------------------------------------------------------
>
>                 Key: HIVE-16886
>                 URL: https://issues.apache.org/jira/browse/HIVE-16886
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, Metastore
>    Affects Versions: 3.0.0, 2.3.2, 2.3.3
>            Reporter: Sergio Peña
>            Assignee: anishek
>            Priority: Major
>              Labels: TODOC3.0
>             Fix For: 3.0.0
>
>         Attachments: HIVE-16886.1.patch, HIVE-16886.2.patch,
> HIVE-16886.3.patch, HIVE-16886.4.patch, HIVE-16886.5.patch,
> HIVE-16886.6.patch, HIVE-16886.7.patch, HIVE-16886.8.patch,
> datastore-identity-holes.diff
>
> When running multiple Hive Metastore servers with DB notifications enabled,
> I could see that notifications can be persisted with a duplicated event ID.
> This does not happen when running multiple threads in a single HMS node, due
> to the lock acquired on the DbNotificationsLog class, but multiple HMS
> instances can conflict.
> The issue is in the ObjectStore#addNotificationEvent() method. The event ID
> fetched from the datastore is used for the new notification, incremented in
> the server itself, then persisted or updated back to the datastore. If 2
> servers read the same ID, then these 2 servers write a new notification with
> the same ID.
> The event ID is neither unique nor a primary key.
> Here's a test case using the TestObjectStore class that confirms this issue:
> {noformat}
> @Test
> public void testConcurrentAddNotifications() throws ExecutionException, InterruptedException {
>     final int NUM_THREADS = 2;
>     CountDownLatch countIn = new CountDownLatch(NUM_THREADS);
>     CountDownLatch countOut = new CountDownLatch(1);
>     HiveConf conf = new HiveConf();
>     conf.setVar(HiveConf.ConfVars.METASTORE_EXPRESSION_PROXY_CLASS,
>         MockPartitionExpressionProxy.class.getName());
>     ExecutorService executorService = Executors.newFixedThreadPool(NUM_THREADS);
>     FutureTask<Void> tasks[] = new FutureTask[NUM_THREADS];
>     for (int i = 0; i < NUM_THREADS; ++i) {
>         final int n = i;
>         tasks[i] = new FutureTask<Void>(new Callable<Void>() {
>             @Override
>             public Void call() throws Exception {
>                 ObjectStore store = new ObjectStore();
>                 store.setConf(conf);
>                 NotificationEvent dbEvent = new NotificationEvent(0, 0,
>                     EventMessage.EventType.CREATE_DATABASE.toString(), "CREATE DATABASE DB" + n);
>                 System.out.println("ADDING NOTIFICATION");
>                 countIn.countDown();
>                 countOut.await();
>                 store.addNotificationEvent(dbEvent);
>                 System.out.println("FINISH NOTIFICATION");
>                 return null;
>             }
>         });
>         executorService.execute(tasks[i]);
>     }
>     countIn.await();
>     countOut.countDown();
>     for (int i = 0; i < NUM_THREADS; ++i) {
>         tasks[i].get();
>     }
>     NotificationEventResponse eventResponse =
>         objectStore.getNextNotification(new NotificationEventRequest());
>     Assert.assertEquals(2, eventResponse.getEventsSize());
>     Assert.assertEquals(1, eventResponse.getEvents().get(0).getEventId());
>     // This fails because the next notification has an event ID = 1
>     Assert.assertEquals(2, eventResponse.getEvents().get(1).getEventId());
> }
> {noformat}
> The last assertion fails expecting an event ID 1 instead of 2.
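The read-increment-write sequence described in the report is a classic lost-update race. A self-contained toy model of it (not Hive code; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Toy model of the ObjectStore#addNotificationEvent race: each "server"
// reads the shared event ID, increments its local copy, then writes it
// back with no datastore-level lock, so two servers that read the same
// value can both issue the same ID.
class EventIdRace {
    private int storedId = 0;                        // stands in for the NOTIFICATION_SEQUENCE row
    private final List<Integer> issued = new ArrayList<>();

    private synchronized void record(int id) { issued.add(id); }

    private void addNotification() {
        int id = storedId;   // 1. read the current ID from the "datastore"
        id = id + 1;         // 2. increment it in the server
        storedId = id;       // 3. write it back -- deliberately unsynchronized
        record(id);
    }

    List<Integer> run() {
        CountDownLatch start = new CountDownLatch(1);
        Runnable server = () -> {
            try { start.await(); } catch (InterruptedException ignored) { }
            addNotification();
        };
        Thread t1 = new Thread(server);
        Thread t2 = new Thread(server);
        t1.start(); t2.start();
        start.countDown();   // release both "servers" at once
        try { t1.join(); t2.join(); } catch (InterruptedException ignored) { }
        return issued;       // may be [1, 1] instead of [1, 2]
    }
}
```

Serializing step 1 at the datastore (a locking read, as discussed in the comment above) is what removes the race.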
[jira] [Assigned] (HIVE-18571) stats issue for MM tables
[ https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin reassigned HIVE-18571:
---------------------------------------

> stats issue for MM tables
> -------------------------
>
>                 Key: HIVE-18571
>                 URL: https://issues.apache.org/jira/browse/HIVE-18571
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>            Priority: Major
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted, and some (simple) stats are
> invalid for ACID table dirs altogether.
> I have a patch almost ready; I need to fix some more stuff and clean up.
[jira] [Commented] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343914#comment-16343914 ]

Hive QA commented on HIVE-18472:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908171/HIVE-18472.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 12792 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8911/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8911/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8911/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908171 - PreCommit-HIVE-Build

> Beeline gives log4j warnings
> ----------------------------
>
>                 Key: HIVE-18472
>                 URL: https://issues.apache.org/jira/browse/HIVE-18472
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Janaki Lahorani
>            Assignee: Janaki Lahorani
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HIVE-18472.1.patch, HIVE-18472.2.patch,
> HIVE-18472.3.patch
>
> Starting Beeline gives the following warnings multiple times:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default
> configuration: logging only errors to the console. Set system property
> 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show
> Log4j2 internal initialization logging.
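The warning above means two StaticLoggerBinder implementations (log4j-slf4j-impl and slf4j-log4j12) are both on the classpath. When one controls the client's pom, the usual remedy is to exclude the unwanted binding; the snippet below is only an illustrative sketch (the hadoop-common coordinates are an assumed example of a dependency dragging in slf4j-log4j12), and per the comments on this issue the real fix here belongs on the Hadoop side.

```xml
<!-- Illustrative only: exclude the older SLF4J binding wherever a dependency
     (here a placeholder Hadoop artifact) pulls in slf4j-log4j12, so that
     log4j-slf4j-impl is the single binding left on the classpath. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```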
[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data
[ https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin updated HIVE-18570:
------------------------------------
    Description:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2, but its results would be invisible.
If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID semantics), it seems to me this sequence of events is only possible under the read-uncommitted isolation level (so, 2 deletes rows written by 1).
Under any other isolation level, rows written by 1 must survive, or there must be some lock-based change in sequence, or a conflict.
Update: to clarify, if 1 ran an update on rows instead of an insert, and 2 still ran an IOW/delete, a row lock conflict (or equivalent) should cause one of them to fail.

  was:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.
If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID semantics), it seems to me this sequence of events is only possible under read-uncommitted isolation level (so, 2 deletes rows written by 1).
Under any other isolation level rows written by 1 must survive, or there must be some lock based change in sequence or conflict.
Update: to clarify, if 1 ran an update on rows and 2 ran a delete, row lock conflict would cause one of them to fail.

> ACID IOW implemented using base may delete too much data
> --------------------------------------------------------
>
>                 Key: HIVE-18570
>                 URL: https://issues.apache.org/jira/browse/HIVE-18570
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Priority: Major
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID
> semantics), it seems to me this sequence of events is only possible under
> read-uncommitted isolation level (so, 2 deletes rows written by 1).
> Under any other isolation level rows written by 1 must survive, or there must
> be some lock based change in sequence or conflict.
> Update: to clarify, if 1 ran an update on rows instead of an insert, and 2
> still ran an IOW/delete, row lock conflict (or equivalent) should cause one
> of them to fail.
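The visibility problem described above can be modeled in a few lines: a reader that selects base_N ignores every delta at or below N, so an insert that lands in delta_1 after base_2 exists simply disappears. This is a toy model of the directory-selection rule, not Hive's actual AcidUtils logic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy model of ACID directory visibility: the reader picks the highest
// base_N and then only reads deltas with an ID greater than N. A committed
// insert into delta_1 is therefore invisible once base_2 exists, which is
// the "IOW deletes too much" scenario from the description.
class IowVisibility {
    static List<String> visibleRows(Map<String, List<String>> dirs) {
        int base = dirs.keySet().stream()
            .filter(d -> d.startsWith("base_"))
            .mapToInt(d -> Integer.parseInt(d.substring(5)))
            .max().orElse(-1);                         // -1: no base yet
        List<String> rows = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : dirs.entrySet()) {
            String d = e.getKey();
            int id = Integer.parseInt(d.substring(d.indexOf('_') + 1));
            // Keep the chosen base itself, plus only the deltas above it.
            if (d.startsWith("base_") ? id == base : id > base) {
                rows.addAll(e.getValue());
            }
        }
        return rows;
    }
}
```

With dirs {delta_0: [r0], delta_1: [r1], base_2: [iow]} only the base_2 rows remain visible, matching the lost-insert sequence in the description.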
[jira] [Updated] (HIVE-18570) ACID IOW implemented using base may delete too much data
[ https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey Shelukhin updated HIVE-18570:
------------------------------------
    Description:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.
If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID semantics), it seems to me this sequence of events is only possible under read-uncommitted isolation level (so, 2 deletes rows written by 1).
Under any other isolation level rows written by 1 must survive, or there must be some lock based change in sequence or conflict.
Update: to clarify, if 1 ran an update on rows and 2 ran a delete, row lock conflict would cause one of them to fail.

  was:
Suppose we have a table with delta_0 insert data.
Txn 1 starts an insert into delta_1.
Txn 2 starts an IOW into base_2.
Txn 2 commits.
Txn 1 commits after txn 2 but its results would be invisible.
If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID semantics), it seems to me this sequence of events is only possible under read-uncommitted isolation level (so, 2 deletes rows written by 1).
Under any other isolation level rows written by 1 must survive, or there must be some lock based change in sequence or conflict.

> ACID IOW implemented using base may delete too much data
> --------------------------------------------------------
>
>                 Key: HIVE-18570
>                 URL: https://issues.apache.org/jira/browse/HIVE-18570
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Priority: Major
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID
> semantics), it seems to me this sequence of events is only possible under
> read-uncommitted isolation level (so, 2 deletes rows written by 1).
> Under any other isolation level rows written by 1 must survive, or there must
> be some lock based change in sequence or conflict.
> Update: to clarify, if 1 ran an update on rows and 2 ran a delete, row lock
> conflict would cause one of them to fail.
[jira] [Commented] (HIVE-18570) ACID IOW implemented using base may delete too much data
[ https://issues.apache.org/jira/browse/HIVE-18570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343895#comment-16343895 ]

Sergey Shelukhin commented on HIVE-18570:
-----------------------------------------

cc [~ekoifman] [~steveyeom2017] [~gopalv]

> ACID IOW implemented using base may delete too much data
> --------------------------------------------------------
>
>                 Key: HIVE-18570
>                 URL: https://issues.apache.org/jira/browse/HIVE-18570
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Priority: Major
>
> Suppose we have a table with delta_0 insert data.
> Txn 1 starts an insert into delta_1.
> Txn 2 starts an IOW into base_2.
> Txn 2 commits.
> Txn 1 commits after txn 2 but its results would be invisible.
> If we treat IOW foo like DELETE FROM foo (to reason about it w.r.t. ACID
> semantics), it seems to me this sequence of events is only possible under
> read-uncommitted isolation level (so, 2 deletes rows written by 1).
> Under any other isolation level rows written by 1 must survive, or there must
> be some lock based change in sequence or conflict.
[jira] [Commented] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343876#comment-16343876 ]

Vihang Karajgaonkar commented on HIVE-18472:
--------------------------------------------

Updated patch looks fine. Tested using the {{hiveserver2}} script and also confirmed by running {{schematool}} locally. We still see a warning in the output, but that fix needs to be on the Hadoop side: not including the log4j jars from the share/common/lib directory.

> Beeline gives log4j warnings
> ----------------------------
>
>                 Key: HIVE-18472
>                 URL: https://issues.apache.org/jira/browse/HIVE-18472
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Janaki Lahorani
>            Assignee: Janaki Lahorani
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HIVE-18472.1.patch, HIVE-18472.2.patch,
> HIVE-18472.3.patch
>
> Starting Beeline gives the following warnings multiple times:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default
> configuration: logging only errors to the console. Set system property
> 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show
> Log4j2 internal initialization logging.
[jira] [Commented] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343877#comment-16343877 ]

Vihang Karajgaonkar commented on HIVE-18472:
--------------------------------------------

+1 on the latest patch.

> Beeline gives log4j warnings
> ----------------------------
>
>                 Key: HIVE-18472
>                 URL: https://issues.apache.org/jira/browse/HIVE-18472
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Janaki Lahorani
>            Assignee: Janaki Lahorani
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HIVE-18472.1.patch, HIVE-18472.2.patch,
> HIVE-18472.3.patch
>
> Starting Beeline gives the following warnings multiple times:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default
> configuration: logging only errors to the console. Set system property
> 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show
> Log4j2 internal initialization logging.
[jira] [Assigned] (HIVE-18563) "Load data into table" behavior is different between 1.2.1 and 1.2.1000
[ https://issues.apache.org/jira/browse/HIVE-18563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Deepak Jaiswal reassigned HIVE-18563:
-------------------------------------
    Assignee: Deepak Jaiswal

> "Load data into table" behavior is different between 1.2.1 and 1.2.1000
> -----------------------------------------------------------------------
>
>                 Key: HIVE-18563
>                 URL: https://issues.apache.org/jira/browse/HIVE-18563
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, HiveServer2
>         Environment: * OS : CentOS6
> * JDK : 1.8.0_152 (Oracle)
> * HDP : 2.3.2.0 and 2.6.2.0
> * Hive : 1.2.1.2.3.2.0-2950 and 1.2.1000.2.6.2.0-205
>            Reporter: Junichi Oda
>            Assignee: Deepak Jaiswal
>            Priority: Major
>
> After upgrading HDP from 2.3.2.0 to 2.6.2.0, the "load data into table"
> behavior changed.
> Data is ingested hourly, and all files have the same name.
> {code:java}
> /user/user1/logs/mmdd/00/part-r-0.gz
> /user/user1/logs/mmdd/01/part-r-0.gz
> /user/user1/logs/mmdd/02/part-r-0.gz
> /user/user1/logs/mmdd/03/part-r-0.gz
> ・・・
> /user/user1/logs/mmdd/22/part-r-0.gz
> /user/user1/logs/mmdd/23/part-r-0.gz
> {code}
> Before upgrade (HDP 2.3.2.0):
> {code:java}
> HQL
> hive> load data inpath '/user/user1/logs/mmdd/*/*.gz' into table
> sample_db.sample_tbl partition (dt='mmdd');
>
> Result
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_1.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_10.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_11.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_12.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_13.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_14.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_15.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_16.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_17.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_18.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_19.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_2.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_20.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_21.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_22.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_23.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_3.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_4.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_5.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_6.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_7.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_8.gz
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0_copy_9.gz
> {code}
> All colliding files were renamed to part-r-0_copy_*.gz alongside the first
> file, part-r-0.gz.
> After upgrade (HDP 2.6.2.0):
> {code:java}
> HQL
> hive> load data inpath '/user/user1/logs/mmdd/*/*.gz' into table
> sample_db.sample_tbl partition (dt='mmdd');
>
> Result
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd
> /hive/warehouse/sample_db.db/sample_tbl/dt=mmdd/part-r-0.gz
> {code}
> There is only part-r-0.gz. This file is the same file that would have been
> part-r-0_copy_23.gz before the upgrade.
> When files are loaded one by one, I can load all files as in the HDP 2.3.2.0
> environment.
> Why is the behavior different between 2.3.2.0 and 2.6.2.0?
> Thanks in advance.
>
> https://community.hortonworks.com/questions/158176/load-data-into-table-behavior-is-different-between.html
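The pre-upgrade behavior amounts to collision renaming: when the target name already exists, append _copy_N before the extension. A sketch of that naming rule follows (a hypothetical helper for illustration, not Hive's actual implementation, which changed between these versions):

```java
import java.util.Set;

// Sketch of the collision-renaming behavior described above: given a file
// name and the set of names already present in the partition directory,
// return the first non-colliding name, inserting "_copy_N" before the
// extension when needed (part-r-0.gz -> part-r-0_copy_1.gz, ...).
class CopySuffix {
    static String target(String name, Set<String> existing) {
        if (!existing.contains(name)) {
            return name;                         // no collision: keep the name
        }
        int dot = name.lastIndexOf('.');
        String base = dot >= 0 ? name.substring(0, dot) : name;
        String ext = dot >= 0 ? name.substring(dot) : "";
        for (int n = 1; ; n++) {                 // find the first free suffix
            String candidate = base + "_copy_" + n + ext;
            if (!existing.contains(candidate)) {
                return candidate;
            }
        }
    }
}
```

Under this rule, loading 24 identically named hourly files one after another yields part-r-0.gz plus part-r-0_copy_1.gz through part-r-0_copy_23.gz, matching the pre-upgrade listing; the post-upgrade behavior instead overwrites, leaving a single file.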
[jira] [Commented] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343799#comment-16343799 ]

Hive QA commented on HIVE-18472:
--------------------------------

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 41s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 6924b9c |
| modules | C: . U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8911/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Beeline gives log4j warnings
> ----------------------------
>
>                 Key: HIVE-18472
>                 URL: https://issues.apache.org/jira/browse/HIVE-18472
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Janaki Lahorani
>            Assignee: Janaki Lahorani
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HIVE-18472.1.patch, HIVE-18472.2.patch,
> HIVE-18472.3.patch
>
> Starting Beeline gives the following warnings multiple times:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> ERROR StatusLogger No log4j2 configuration file found. Using default
> configuration: logging only errors to the console. Set system property
> 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show
> Log4j2 internal initialization logging.
[jira] [Commented] (HIVE-18526) Backport HIVE-16886 to Hive 2
[ https://issues.apache.org/jira/browse/HIVE-18526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343797#comment-16343797 ]

Vihang Karajgaonkar commented on HIVE-18526:
--------------------------------------------

Okay, looks like [~anishek] has raised a good point on this patch on HIVE-16886. Copying it here for reference:

{quote}
http://www.datanucleus.org/products/accessplatform_4_1/jdo/transactions.html

By default DataNucleus does not currently lock the objects fetched with pessimistic locking, but you can configure this behaviour for RDBMS datastores by setting the persistence property datanucleus.SerializeRead to true. This will result in all "SELECT ... FROM ..." statements being changed to be "SELECT ... FROM ... FOR UPDATE". This will be applied only where the underlying RDBMS supports the "FOR UPDATE" syntax.

for SQLServer this will not work since the syntax is different. the code shows us using datanuclues 4.1 version btw.
{quote}

This patch may not work with SQL Server as suggested above. Hi [~akolb], can you investigate whether we can backport the original patch of HIVE-16886 to branch-2? The patch you attached is a lot simpler, but since it may not work with MSSQL we will still have a problem. Also, I think it is a good idea to keep branch-2 and master similar as far as individual patches are concerned. Thanks!

> Backport HIVE-16886 to Hive 2
> -----------------------------
>
>                 Key: HIVE-18526
>                 URL: https://issues.apache.org/jira/browse/HIVE-18526
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Hive
>    Affects Versions: 2.3.3
>            Reporter: Alexander Kolbasov
>            Assignee: Alexander Kolbasov
>            Priority: Major
>         Attachments: HIVE-18526.01-branch-2.patch,
> HIVE-18526.02-branch-2.patch
>
> The fix for HIVE-16886 isn't in Hive 2.
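The property quoted above is plain JDO/DataNucleus configuration. A minimal sketch of enabling it is below; only the datanucleus.SerializeRead key comes from the quote, while the surrounding bootstrap (the PMF class property and the helper class itself) is illustrative of a generic JDO setup, not Hive's metastore wiring.

```java
import java.util.Properties;

// Sketch of enabling pessimistic-locking reads in DataNucleus via the
// property quoted in the comment above. With it set, "SELECT ... FROM ..."
// becomes "SELECT ... FROM ... FOR UPDATE" on RDBMS that support that
// syntax; per the discussion, MS SQL Server needs the adapter's
// WITH (UPDLOCK, ROWLOCK) hint instead (see HIVE-16886).
class SerializeReadConfig {
    static Properties jdoProperties() {
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
            "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
        props.setProperty("datanucleus.SerializeRead", "true");
        return props;
    }
}
```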
[jira] [Commented] (HIVE-18237) missing results for insert_only table after DP insert
[ https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343788#comment-16343788 ] Hive QA commented on HIVE-18237: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12908160/HIVE-18237.04.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12792 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_20] (batchId=74) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=163) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded] (batchId=206) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveBackKill (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8910/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8910/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8910/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 24 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12908160 - PreCommit-HIVE-Build > missing results for insert_only table after DP insert > - > > Key: HIVE-18237 > URL: https://issues.apache.org/jira/browse/HIVE-18237 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Zoltan Haindrich >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, > HIVE-18237.03.patch, HIVE-18237.04.patch > > > {code} > set hive.stats.column.autogather=false; > set hive.exec.dynamic.partition.mode=nonstrict; > set hive.exec.max.dynamic.partitions.pernode=200; > set hive.exec.max.dynamic.partitions=200; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table i0 (p int,v int); > insert into i0 values > (0,0), > (2,2), > (3,3); > create table p0 (v int) partitioned by (p int) stored as orc > tblproperties ("transactional"="true", > "transactional_properties"="insert_only"); > explain insert overwrite table p0 partition (p) select * from i0 where v < 3; > insert overwrite table p0 partition (p) select * from i0 where v < 3; > select count(*) from p0 where v!=1; > {code} > The table p0 should contain {{2}} rows at this point; but the result is {{0}}. > * seems to be specific to insert_only tables > * the existing data appears if an {{insert into}} is executed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
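The expected semantics of the dynamic-partition overwrite in the repro above can be sketched in plain Java. This is an illustrative model, not Hive code: rows (0,0) and (2,2) pass the v < 3 filter, each (p, v) row lands in partition p, only the touched partitions are rewritten, and the final count over v != 1 should be 2.

```java
import java.util.*;

// Illustrative model of "insert overwrite table p0 partition (p) select ..."
// with dynamic partitioning: group the filtered source rows by partition
// value and overwrite exactly those partitions in the target.
class DynamicPartitionSketch {
    // target: partition value p -> rows of v in that partition
    static Map<Integer, List<Integer>> insertOverwrite(
            Map<Integer, List<Integer>> target, List<int[]> sourceRows) {
        Map<Integer, List<Integer>> written = new HashMap<>();
        for (int[] row : sourceRows) {          // row = {p, v}
            written.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row[1]);
        }
        target.putAll(written);                 // replace only touched partitions
        return target;
    }
}
```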
[jira] [Commented] (HIVE-18529) Vectorization: Add a debug config option to disable scratch column reuse
[ https://issues.apache.org/jira/browse/HIVE-18529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343776#comment-16343776 ] Matt McCline commented on HIVE-18529: - +1 LGTM > Vectorization: Add a debug config option to disable scratch column reuse > > > Key: HIVE-18529 > URL: https://issues.apache.org/jira/browse/HIVE-18529 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 3.0.0, 2.3.2 >Reporter: Gopal V >Assignee: Gopal V >Priority: Major > Attachments: HIVE-18529-branch-2.patch, HIVE-18529.1.patch, > HIVE-18529.2.patch > > > Debugging scratch column reuse is particularly painful and slow, adding a > config allows for this to be done without rebuilds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18553) VectorizedParquetReader fails after adding a new column to table
[ https://issues.apache.org/jira/browse/HIVE-18553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343725#comment-16343725 ] Vihang Karajgaonkar commented on HIVE-18553: Hi [~Ferd] Thanks for the patch. Can you please test with the patch what happens when we change the column types? Does it work in non-vectorized code? > VectorizedParquetReader fails after adding a new column to table > > > Key: HIVE-18553 > URL: https://issues.apache.org/jira/browse/HIVE-18553 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.0.0, 2.4.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-18553.2.patch, HIVE-18553.patch > > > VectorizedParquetReader throws an exception when trying to reading from a > parquet table on which new columns are added. Steps to reproduce below: > {code} > 0: jdbc:hive2://localhost:1/default> desc test_p; > +---++--+ > | col_name | data_type | comment | > +---++--+ > | t1| tinyint| | > | t2| tinyint| | > | i1| int| | > | i2| int| | > +---++--+ > 0: jdbc:hive2://localhost:1/default> set hive.fetch.task.conversion=none; > 0: jdbc:hive2://localhost:1/default> set > hive.vectorized.execution.enabled=true; > 0: jdbc:hive2://localhost:1/default> alter table test_p add columns (ts > timestamp); > 0: jdbc:hive2://localhost:1/default> select * from test_p; > Error: Error while processing statement: FAILED: Execution Error, return code > 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2) > {code} > Following exception is seen in the logs > {code} > Caused by: java.lang.IllegalArgumentException: [ts] BINARY is not in the > store: [[i1] INT32, [i2] INT32, [t1] INT32, [t2] INT32] 3 > at > org.apache.parquet.hadoop.ColumnChunkPageReadStore.getPageReader(ColumnChunkPageReadStore.java:160) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:479) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:432) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:393) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:345) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:88) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:167) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:52) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:229) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:142) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:199) > 
~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:185) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) > ~[hadoop-mapreduce-client-core-3.0.0-alpha3-cdh6.x-SNAPSHOT.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) > ~[hadoop-mapreduce-client-core-3.0.0-alpha
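The stack trace shows getPageReader() failing because the requested column {{ts}} exists in the table schema but not in the older Parquet file's footer. A minimal sketch of the fix direction, using plain Java collections rather than the actual Parquet/Hive reader classes, is to detect the missing column up front and return a null-filled batch instead of consulting the page store:

```java
import java.util.*;

// Illustrative sketch (not the actual VectorizedParquetRecordReader code):
// a column requested from the table schema but absent from this file's
// schema yields nulls rather than a page-store lookup that throws
// IllegalArgumentException.
class SchemaEvolutionSketch {
    static List<Object> readColumn(String column,
                                   Map<String, List<Object>> fileColumns,
                                   int rowCount) {
        List<Object> data = fileColumns.get(column);
        if (data != null) {
            return data;                        // column present in this file
        }
        // Newly added column: fill the batch with nulls instead of failing.
        return new ArrayList<>(Collections.<Object>nCopies(rowCount, null));
    }
}
```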
[jira] [Commented] (HIVE-16886) HMS log notifications may have duplicated event IDs if multiple HMS are running concurrently
[ https://issues.apache.org/jira/browse/HIVE-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343724#comment-16343724 ] Na Li commented on HIVE-16886: -- * This will be effective only where the underlying RDBMS supports the "FOR UPDATE" syntax. [http://www.datanucleus.org/products/accessplatform_4_1/jdo/transactions.html] * MySQL, Oracle, DB2, Derby support this "SELECT FOR UPDATE" syntax. * [https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html] * [https://www.toadworld.com/platforms/oracle/w/wiki/1875.select-for-update] * [https://www.ibm.com/support/knowledgecenter/en/SSEPEK_11.0.0/sqlref/src/tpc/db2z_sql_selectstatementexamples.html] * [https://www.postgresql.org/docs/9.0/static/sql-select.html] * [https://db.apache.org/derby/docs/10.2/ref/rrefsqlj31783.html] * The SQL standard specifies a {{FOR UPDATE}} clause to be applicable for cursors. Most databases interpret this as being applicable for all {{SELECT}} statements. 
An {color:#ff9900}exception{color} to this rule are the {color:#ff9900}CUBRID{color} and {color:#ff9900}SQL Server{color} databases, that do not allow for any {{FOR UPDATE}} clause in a regular SQL [https://www.jooq.org/doc/3.6/manual/sql-building/sql-statements/select-statement/for-update-clause/]{color} > HMS log notifications may have duplicated event IDs if multiple HMS are > running concurrently > > > Key: HIVE-16886 > URL: https://issues.apache.org/jira/browse/HIVE-16886 > Project: Hive > Issue Type: Bug > Components: Hive, Metastore >Affects Versions: 3.0.0, 2.3.2, 2.3.3 >Reporter: Sergio Peña >Assignee: anishek >Priority: Major > Labels: TODOC3.0 > Fix For: 3.0.0 > > Attachments: HIVE-16886.1.patch, HIVE-16886.2.patch, > HIVE-16886.3.patch, HIVE-16886.4.patch, HIVE-16886.5.patch, > HIVE-16886.6.patch, HIVE-16886.7.patch, HIVE-16886.8.patch, > datastore-identity-holes.diff > > > When running multiple Hive Metastore servers and DB notifications are > enabled, I could see that notifications can be persisted with a duplicated > event ID. > This does not happen when running multiple threads in a single HMS node due > to the locking acquired on the DbNotificationsLog class, but multiple HMS > could cause conflicts. > The issue is in the ObjectStore#addNotificationEvent() method. The event ID > fetched from the datastore is used for the new notification, incremented in > the server itself, then persisted or updated back to the datastore. If 2 > servers read the same ID, then these 2 servers write a new notification with > the same ID. > The event ID is not unique nor a primary key. 
> Here's a test case using the TestObjectStore class that confirms this issue: > {noformat} > @Test > public void testConcurrentAddNotifications() throws ExecutionException, > InterruptedException { > final int NUM_THREADS = 2; > CountDownLatch countIn = new CountDownLatch(NUM_THREADS); > CountDownLatch countOut = new CountDownLatch(1); > HiveConf conf = new HiveConf(); > conf.setVar(HiveConf.ConfVars.METASTORE_EXPRESSION_PROXY_CLASS, > MockPartitionExpressionProxy.class.getName()); > ExecutorService executorService = > Executors.newFixedThreadPool(NUM_THREADS); > FutureTask tasks[] = new FutureTask[NUM_THREADS]; > for (int i=0; i final int n = i; > tasks[i] = new FutureTask(new Callable() { > @Override > public Void call() throws Exception { > ObjectStore store = new ObjectStore(); > store.setConf(conf); > NotificationEvent dbEvent = > new NotificationEvent(0, 0, > EventMessage.EventType.CREATE_DATABASE.toString(), "CREATE DATABASE DB" + n); > System.out.println("ADDING NOTIFICATION"); > countIn.countDown(); > countOut.await(); > store.addNotificationEvent(dbEvent); > System.out.println("FINISH NOTIFICATION"); > return null; > } > }); > executorService.execute(tasks[i]); > } > countIn.await(); > countOut.countDown(); > for (int i = 0; i < NUM_THREADS; ++i) { > tasks[i].get(); > } > NotificationEventResponse eventResponse = > objectStore.getNextNotification(new NotificationEventRequest()); > Assert.assertEquals(2, eventResponse.getEventsSize()); > Assert.assertEquals(1, eventResponse.getEvents().get(0).getEventId()); > // This fails because the next notification has an event ID = 1 > Assert.assertEquals(2, eventResponse.getEvents().get(1).getEventId()); >
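The race in ObjectStore#addNotificationEvent() boils down to an unserialized read-modify-write on the event ID. A self-contained sketch of the bug and the fix direction (an AtomicLong standing in here for a locked read of the sequence row, not the actual HMS change):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the duplicated-event-ID race: two metastores each read the
// current ID, increment locally, and write back, so both can emit the same
// ID. Serializing the read-modify-write yields unique IDs.
class EventIdSketch {
    private long unsafeId = 0;                  // racy path
    private final AtomicLong safeId = new AtomicLong(0);

    long nextIdRacy() {
        long current = unsafeId;                // two readers can see the same value
        unsafeId = current + 1;                 // ...and both write current + 1
        return unsafeId;
    }

    long nextIdSafe() {
        return safeId.incrementAndGet();        // atomic read-modify-write
    }
}
```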
[jira] [Commented] (HIVE-18237) missing results for insert_only table after DP insert
[ https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343695#comment-16343695 ] Hive QA commented on HIVE-18237: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 47s{color} | {color:red} ql: The patch generated 1 new + 1043 unchanged - 0 fixed = 1044 total (was 1043) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 6924b9c | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8910/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8910/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> missing results for insert_only table after DP insert > - > > Key: HIVE-18237 > URL: https://issues.apache.org/jira/browse/HIVE-18237 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Zoltan Haindrich >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, > HIVE-18237.03.patch, HIVE-18237.04.patch > > > {code} > set hive.stats.column.autogather=false; > set hive.exec.dynamic.partition.mode=nonstrict; > set hive.exec.max.dynamic.partitions.pernode=200; > set hive.exec.max.dynamic.partitions=200; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table i0 (p int,v int); > insert into i0 values > (0,0), > (2,2), > (3,3); > create table p0 (v int) partitioned by (p int) stored as orc > tblproperties ("transactional"="true", > "transactional_properties"="insert_only"); > explain insert overwrite table p0 partition (p) select * from i0 where v < 3; > insert overwrite table p0 partition (p) select * from i0 where v < 3; > select count(*) from p0 where v!=1; > {code} > The table p0 should contain {{2}} rows at this point; but the result is {{0}}. > * seems to be specific to insert_only tables > * the existing data appears if an {{insert into}} is executed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18472) Beeline gives log4j warnings
[ https://issues.apache.org/jira/browse/HIVE-18472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-18472: --- Attachment: HIVE-18472.3.patch > Beeline gives log4j warnings > > > Key: HIVE-18472 > URL: https://issues.apache.org/jira/browse/HIVE-18472 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18472.1.patch, HIVE-18472.2.patch, > HIVE-18472.3.patch > > > Starting Beeline gives the following warnings multiple times: > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/opt/cloudera/parcels/CDH-6.x-1.cdh6.x.p0.215261/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > ERROR StatusLogger No log4j2 configuration file found. Using default > configuration: logging only errors to the console. Set system property > 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show > Log4j2 internal initialization logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18499) Amend point lookup tests to check for data
[ https://issues.apache.org/jira/browse/HIVE-18499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-18499: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~janulatha] for your work! > Amend point lookup tests to check for data > -- > > Key: HIVE-18499 > URL: https://issues.apache.org/jira/browse/HIVE-18499 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18499.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.
[ https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343658#comment-16343658 ] Hive QA commented on HIVE-18569: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12908159/HIVE-18569.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12792 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=163) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) 
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.common.metrics.metrics2.TestCodahaleMetrics.testFileReporting (batchId=258) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8909/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8909/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8909/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 24 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12908159 - PreCommit-HIVE-Build > Hive Druid indexing not dealing with decimals in correct way. > - > > Key: HIVE-18569 > URL: https://issues.apache.org/jira/browse/HIVE-18569 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18569.patch > > > Currently, a decimal column is indexed as double in druid. > This should not happen and either the user has to add an explicit cast or we > can add a flag to enable approximation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
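The precision problem behind this issue is easy to demonstrate: values that are exact under DECIMAL arithmetic are generally not exactly representable as binary doubles, which is why indexing decimals as doubles should require an explicit cast or an approximation flag. A minimal illustration:

```java
import java.math.BigDecimal;

// Why indexing a decimal column as double is lossy: 0.1 + 0.2 is exactly 0.3
// in decimal arithmetic but not in binary floating point.
class DecimalVsDoubleSketch {
    static BigDecimal addExact(String a, String b) {
        return new BigDecimal(a).add(new BigDecimal(b));   // exact decimal math
    }

    static double addDouble(double a, double b) {
        return a + b;                                      // binary floating point
    }
}
```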
[jira] [Commented] (HIVE-18536) IOW + DP is broken for insert-only ACID
[ https://issues.apache.org/jira/browse/HIVE-18536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343649#comment-16343649 ] Eugene Koifman commented on HIVE-18536: --- I left one comment on RB. The code to create/parse base/delta dir names has spread to different places instead of being localized in AcidUtils. This will come back to haunt us, but it was not introduced in this patch. There are a bunch of new checkstyle warnings; otherwise it looks OK. > IOW + DP is broken for insert-only ACID > --- > > Key: HIVE-18536 > URL: https://issues.apache.org/jira/browse/HIVE-18536 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18536.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
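Eugene's point about localizing dir-name logic can be sketched as a single round-trip utility. The delta_%07d_%07d naming below follows what Hive's AcidUtils uses for delta directories, but treat the class and method names as illustrative rather than the actual implementation:

```java
// Illustrative sketch: keep base/delta directory-name construction and
// parsing in one place so the format cannot drift across call sites.
// Assumes Hive's zero-padded "delta_<minTxn>_<maxTxn>" convention.
final class DirNameSketch {
    static String deltaDir(long minTxn, long maxTxn) {
        return String.format("delta_%07d_%07d", minTxn, maxTxn);
    }

    static long[] parseDelta(String name) {
        if (!name.startsWith("delta_")) {
            throw new IllegalArgumentException("not a delta dir: " + name);
        }
        String[] parts = name.substring("delta_".length()).split("_");
        return new long[] { Long.parseLong(parts[0]), Long.parseLong(parts[1]) };
    }
}
```

With a single utility like this, every producer and consumer of the name agrees by construction, which is the property the comment says is being lost.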
[jira] [Commented] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.
[ https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343590#comment-16343590 ] Hive QA commented on HIVE-18569: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle 
{color} | {color:red} 0m 15s{color} | {color:red} common: The patch generated 1 new + 424 unchanged - 0 fixed = 425 total (was 424) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} druid-handler: The patch generated 8 new + 36 unchanged - 1 fixed = 44 total (was 37) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 894efdb | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8909/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8909/yetus/diff-checkstyle-druid-handler.txt | | modules | C: common ql druid-handler U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8909/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Hive Druid indexing not dealing with decimals in correct way. 
> - > > Key: HIVE-18569 > URL: https://issues.apache.org/jira/browse/HIVE-18569 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18569.patch > > > Currently, a decimal column is indexed as double in druid. > This should not happen and either the user has to add an explicit cast or we > can add a flag to enable approximation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18237) missing results for insert_only table after DP insert
[ https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343556#comment-16343556 ] Steve Yeom commented on HIVE-18237: --- Hi [~sershe], could you take a look at patch 04 for this jira? Please let me know if you need a review board link. Thank you, Steve. > missing results for insert_only table after DP insert > - > > Key: HIVE-18237 > URL: https://issues.apache.org/jira/browse/HIVE-18237 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Zoltan Haindrich >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, > HIVE-18237.03.patch, HIVE-18237.04.patch > > > {code} > set hive.stats.column.autogather=false; > set hive.exec.dynamic.partition.mode=nonstrict; > set hive.exec.max.dynamic.partitions.pernode=200; > set hive.exec.max.dynamic.partitions=200; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table i0 (p int,v int); > insert into i0 values > (0,0), > (2,2), > (3,3); > create table p0 (v int) partitioned by (p int) stored as orc > tblproperties ("transactional"="true", > "transactional_properties"="insert_only"); > explain insert overwrite table p0 partition (p) select * from i0 where v < 3; > insert overwrite table p0 partition (p) select * from i0 where v < 3; > select count(*) from p0 where v!=1; > {code} > The table p0 should contain {{2}} rows at this point; but the result is {{0}}. > * seems to be specific to insert_only tables > * the existing data appears if an {{insert into}} is executed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18237) missing results for insert_only table after DP insert
[ https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343553#comment-16343553 ] Steve Yeom commented on HIVE-18237: --- Added patch 04 after fixing checkstyle warnings. > missing results for insert_only table after DP insert > - > > Key: HIVE-18237 > URL: https://issues.apache.org/jira/browse/HIVE-18237 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Zoltan Haindrich >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, > HIVE-18237.03.patch, HIVE-18237.04.patch > > > {code} > set hive.stats.column.autogather=false; > set hive.exec.dynamic.partition.mode=nonstrict; > set hive.exec.max.dynamic.partitions.pernode=200; > set hive.exec.max.dynamic.partitions=200; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table i0 (p int,v int); > insert into i0 values > (0,0), > (2,2), > (3,3); > create table p0 (v int) partitioned by (p int) stored as orc > tblproperties ("transactional"="true", > "transactional_properties"="insert_only"); > explain insert overwrite table p0 partition (p) select * from i0 where v < 3; > insert overwrite table p0 partition (p) select * from i0 where v < 3; > select count(*) from p0 where v!=1; > {code} > The table p0 should contain {{2}} rows at this point; but the result is {{0}}. > * seems to be specific to insert_only tables > * the existing data appears if an {{insert into}} is executed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18237) missing results for insert_only table after DP insert
[ https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom updated HIVE-18237: -- Attachment: HIVE-18237.04.patch > missing results for insert_only table after DP insert > - > > Key: HIVE-18237 > URL: https://issues.apache.org/jira/browse/HIVE-18237 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Zoltan Haindrich >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-18237.01.patch, HIVE-18237.02.patch, > HIVE-18237.03.patch, HIVE-18237.04.patch > > > {code} > set hive.stats.column.autogather=false; > set hive.exec.dynamic.partition.mode=nonstrict; > set hive.exec.max.dynamic.partitions.pernode=200; > set hive.exec.max.dynamic.partitions=200; > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > create table i0 (p int,v int); > insert into i0 values > (0,0), > (2,2), > (3,3); > create table p0 (v int) partitioned by (p int) stored as orc > tblproperties ("transactional"="true", > "transactional_properties"="insert_only"); > explain insert overwrite table p0 partition (p) select * from i0 where v < 3; > insert overwrite table p0 partition (p) select * from i0 where v < 3; > select count(*) from p0 where v!=1; > {code} > The table p0 should contain {{2}} rows at this point; but the result is {{0}}. > * seems to be specific to insert_only tables > * the existing data appears if an {{insert into}} is executed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.
[ https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-18569: Attachment: HIVE-18569.patch > Hive Druid indexing not dealing with decimals in correct way. > - > > Key: HIVE-18569 > URL: https://issues.apache.org/jira/browse/HIVE-18569 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18569.patch > > > Currently, a decimal column is indexed as double in druid. > This should not happen and either the user has to add an explicit cast or we > can add a flag to enable approximation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.
[ https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343517#comment-16343517 ] Nishant Bangarwa commented on HIVE-18569: - +cc [~bslim] [~jcamachorodriguez] small change, please review. > Hive Druid indexing not dealing with decimals in correct way. > - > > Key: HIVE-18569 > URL: https://issues.apache.org/jira/browse/HIVE-18569 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18569.patch > > > Currently, a decimal column is indexed as double in druid. > This should not happen and either the user has to add an explicit cast or we > can add a flag to enable approximation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.
[ https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-18569: Status: Patch Available (was: Open) > Hive Druid indexing not dealing with decimals in correct way. > - > > Key: HIVE-18569 > URL: https://issues.apache.org/jira/browse/HIVE-18569 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18569.patch > > > Currently, a decimal column is indexed as double in druid. > This should not happen and either the user has to add an explicit cast or we > can add a flag to enable approximation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18569) Hive Druid indexing not dealing with decimals in correct way.
[ https://issues.apache.org/jira/browse/HIVE-18569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa reassigned HIVE-18569: --- > Hive Druid indexing not dealing with decimals in correct way. > - > > Key: HIVE-18569 > URL: https://issues.apache.org/jira/browse/HIVE-18569 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > > Currently, a decimal column is indexed as double in druid. > This should not happen and either the user has to add an explicit cast or we > can add a flag to enable approximation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
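The precision issue described in this report (a decimal column indexed as a double) can be demonstrated in plain Java. The following is an illustrative sketch, not Hive or Druid code; it only shows why treating DECIMAL as double is an approximation:

```java
import java.math.BigDecimal;

public class DecimalAsDouble {
    public static void main(String[] args) {
        // An exact decimal value, as a DECIMAL column stores it.
        BigDecimal exact = new BigDecimal("0.1");

        // Indexing the value as a double introduces binary rounding error.
        double asDouble = exact.doubleValue();
        BigDecimal roundTripped = new BigDecimal(asDouble);

        // The round trip does not recover the original value, which is why
        // an explicit cast (or an opt-in approximation flag) is preferable.
        System.out.println(exact.compareTo(roundTripped) == 0); // prints false
        System.out.println(roundTripped);
    }
}
```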
[jira] [Commented] (HIVE-18544) Create tests to cover appendPartition methods
[ https://issues.apache.org/jira/browse/HIVE-18544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343467#comment-16343467 ] Hive QA commented on HIVE-18544: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12908134/HIVE-18544.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 12859 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=163) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded] (batchId=206) 
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8908/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8908/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8908/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 21 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12908134 - PreCommit-HIVE-Build > Create tests to cover appendPartition methods > - > > Key: HIVE-18544 > URL: https://issues.apache.org/jira/browse/HIVE-18544 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18544.1.patch, HIVE-18544.2.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition appendPartition(String, String, List) > - Partition appendPartition(String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input
[ https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343429#comment-16343429 ] Matt McCline commented on HIVE-18561: - Committed to master. [~teddy.choi] thank you for your review! > Vectorization: Current vector PTF doesn't work under GroupBy and is designed > for reduce-shuffle input > - > > Key: HIVE-18561 > URL: https://issues.apache.org/jira/browse/HIVE-18561 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, > HIVE-18561.03.patch, HIVE-18561.04.patch > > > Need to add validation check in Vectorizer that doesn't vectorize unless PTF > is under reduce-shuffle (with optional SELECT in-between). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input
[ https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18561: Resolution: Fixed Status: Resolved (was: Patch Available) > Vectorization: Current vector PTF doesn't work under GroupBy and is designed > for reduce-shuffle input > - > > Key: HIVE-18561 > URL: https://issues.apache.org/jira/browse/HIVE-18561 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, > HIVE-18561.03.patch, HIVE-18561.04.patch > > > Need to add validation check in Vectorizer that doesn't vectorize unless PTF > is under reduce-shuffle (with optional SELECT in-between). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18557) q.outs: fix issues caused by q.out_spark files
[ https://issues.apache.org/jira/browse/HIVE-18557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343422#comment-16343422 ] Peter Vary commented on HIVE-18557: --- :) :) :) Thanks [~kgyrtkirk] for committing, and [~abstractdog] for taking care of this! > q.outs: fix issues caused by q.out_spark files > -- > > Key: HIVE-18557 > URL: https://issues.apache.org/jira/browse/HIVE-18557 > Project: Hive > Issue Type: Bug >Reporter: Laszlo Bodor >Assignee: Laszlo Bodor >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18557.01.patch > > > HIVE-18061 caused some issues in yetus check by introducing q.out_spark files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18544) Create tests to cover appendPartition methods
[ https://issues.apache.org/jira/browse/HIVE-18544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343393#comment-16343393 ] Peter Vary commented on HIVE-18544: --- +1 pending tests > Create tests to cover appendPartition methods > - > > Key: HIVE-18544 > URL: https://issues.apache.org/jira/browse/HIVE-18544 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18544.1.patch, HIVE-18544.2.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition appendPartition(String, String, List) > - Partition appendPartition(String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18544) Create tests to cover appendPartition methods
[ https://issues.apache.org/jira/browse/HIVE-18544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343389#comment-16343389 ] Hive QA commented on HIVE-18544: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} standalone-metastore: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace 
issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 58bf1a8 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8908/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8908/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create tests to cover appendPartition methods > - > > Key: HIVE-18544 > URL: https://issues.apache.org/jira/browse/HIVE-18544 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18544.1.patch, HIVE-18544.2.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition appendPartition(String, String, List) > - Partition appendPartition(String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18561) Vectorization: Current vector PTF doesn't work under GroupBy and is designed for reduce-shuffle input
[ https://issues.apache.org/jira/browse/HIVE-18561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343375#comment-16343375 ] Teddy Choi commented on HIVE-18561: --- +1. Looks good to me. > Vectorization: Current vector PTF doesn't work under GroupBy and is designed > for reduce-shuffle input > - > > Key: HIVE-18561 > URL: https://issues.apache.org/jira/browse/HIVE-18561 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18561.01.patch, HIVE-18561.02.patch, > HIVE-18561.03.patch, HIVE-18561.04.patch > > > Need to add validation check in Vectorizer that doesn't vectorize unless PTF > is under reduce-shuffle (with optional SELECT in-between). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18566) Create tests to cover adding partitions from PartitionSpec
[ https://issues.apache.org/jira/browse/HIVE-18566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343374#comment-16343374 ] Hive QA commented on HIVE-18566: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12908132/HIVE-18566.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 12881 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=163) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) 
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testQueueing (batchId=287) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8907/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8907/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8907/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 21 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12908132 - PreCommit-HIVE-Build > Create tests to cover adding partitions from PartitionSpec > -- > > Key: HIVE-18566 > URL: https://issues.apache.org/jira/browse/HIVE-18566 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18566.1.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - int add_partitions_pspec(PartitionSpecProxy){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18373) Make it easier to search for column name in a table
[ https://issues.apache.org/jira/browse/HIVE-18373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343363#comment-16343363 ] Siddhant Saraf commented on HIVE-18373: --- And here is filtering for databases: {code:java} show databases; --works show databases '*abc*'; --doesn't work show databases like '*abc*'; --works {code} > Make it easier to search for column name in a table > --- > > Key: HIVE-18373 > URL: https://issues.apache.org/jira/browse/HIVE-18373 > Project: Hive > Issue Type: New Feature >Reporter: Siddhant Saraf >Assignee: Madhudeep Petwal >Priority: Minor > > Within a database, to filter for tables with the string 'abc' in its name, I > can use something like: > {code:java} > hive> use my_database; > hive> show tables '*abc*'; > {code} > It would be great if I can do something similar to search within the list of > columns in a table. > I have a table with around 3200 columns. Searching for the column of interest > is an onerous task after doing a {{describe}} on it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18373) Make it easier to search for column name in a table
[ https://issues.apache.org/jira/browse/HIVE-18373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343360#comment-16343360 ] Siddhant Saraf commented on HIVE-18373: --- The option to use the keyword "like" made me think that the following would also work: {code:java} --doesn't work: hive> show tables like '%abc%'; {code} However, looks like the '%' wildcard doesn't work here. It only accepts the '*' wildcard. > Make it easier to search for column name in a table > --- > > Key: HIVE-18373 > URL: https://issues.apache.org/jira/browse/HIVE-18373 > Project: Hive > Issue Type: New Feature >Reporter: Siddhant Saraf >Assignee: Madhudeep Petwal >Priority: Minor > > Within a database, to filter for tables with the string 'abc' in its name, I > can use something like: > {code:java} > hive> use my_database; > hive> show tables '*abc*'; > {code} > It would be great if I can do something similar to search within the list of > columns in a table. > I have a table with around 3200 columns. Searching for the column of interest > is an onerous task after doing a {{describe}} on it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
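The behaviour observed in the two comments above, where '*' works as a wildcard while '%' is treated literally, can be mimicked with a small glob-to-regex helper. The code below is a hypothetical sketch, not Hive's actual pattern-matching implementation:

```java
public class HiveGlobSketch {
    // Hypothetical helper (not Hive's real code): treat '*' as "any
    // characters" and '|' as an alternative separator, the way SHOW
    // TABLES / SHOW DATABASES patterns behave; '%' has no special meaning.
    static boolean matches(String pattern, String name) {
        for (String alt : pattern.split("\\|")) {
            String regex = alt.replace(".", "\\.").replace("*", ".*");
            if (name.matches(regex)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches("*abc*", "my_abc_table")); // true
        System.out.println(matches("%abc%", "my_abc_table")); // false: '%' is literal
    }
}
```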
[jira] [Assigned] (HIVE-18567) ObjectStore.getPartitionNamesNoTxn doesn't handle max param properly
[ https://issues.apache.org/jira/browse/HIVE-18567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Szita reassigned HIVE-18567: - > ObjectStore.getPartitionNamesNoTxn doesn't handle max param properly > > > Key: HIVE-18567 > URL: https://issues.apache.org/jira/browse/HIVE-18567 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > > As per [this HMS API test > case|https://github.com/apache/hive/commit/fa0a8d27d4149cc5cc2dbb49d8eb6b03f46bc279#diff-25c67d898000b53e623a6df9221aad5dR1044] > listing partition names doesn't check the max param against > MetaStoreConf.LIMIT_PARTITION_REQUEST (as other methods do by > checkLimitNumberOfPartitionsByFilter), and also behaves differently on max=0 > setting compared to other methods. > We should bring this into consistency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
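The consistent limit handling the issue asks for can be sketched as follows. This is an illustrative stand-in, not the actual ObjectStore code; the configuredLimit parameter plays the role of the configured limit (MetaStoreConf.LIMIT_PARTITION_REQUEST in the description above):

```java
import java.util.Arrays;
import java.util.List;

public class PartitionNameLimitSketch {
    // Illustrative stand-in for the consistency requested above, not the
    // actual ObjectStore code: reject requests above the configured limit,
    // treat a negative max as "no limit", and honor max=0 as "no results",
    // so all partition-listing methods behave the same way.
    static List<String> limit(List<String> names, int max, int configuredLimit) {
        if (max >= 0 && configuredLimit >= 0 && max > configuredLimit) {
            throw new IllegalArgumentException(
                    "Requested " + max + " partition names, limit is " + configuredLimit);
        }
        if (max < 0 || max >= names.size()) {
            return names;
        }
        return names.subList(0, max);
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("p=0", "p=2", "p=3");
        System.out.println(limit(names, 2, 10).size());  // 2
        System.out.println(limit(names, -1, 10).size()); // 3: negative max means no limit
        System.out.println(limit(names, 0, 10).size());  // 0
    }
}
```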
[jira] [Commented] (HIVE-18566) Create tests to cover adding partitions from PartitionSpec
[ https://issues.apache.org/jira/browse/HIVE-18566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343327#comment-16343327 ] Hive QA commented on HIVE-18566: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} standalone-metastore: The patch generated 12 new + 15 unchanged - 0 fixed = 27 total (was 15) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace 
issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 58bf1a8 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8907/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8907/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create tests to cover adding partitions from PartitionSpec > -- > > Key: HIVE-18566 > URL: https://issues.apache.org/jira/browse/HIVE-18566 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18566.1.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - int add_partitions_pspec(PartitionSpecProxy){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18544) Create tests to cover appendPartition methods
[ https://issues.apache.org/jira/browse/HIVE-18544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora updated HIVE-18544: - Attachment: HIVE-18544.2.patch > Create tests to cover appendPartition methods > - > > Key: HIVE-18544 > URL: https://issues.apache.org/jira/browse/HIVE-18544 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18544.1.patch, HIVE-18544.2.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition appendPartition(String, String, List) > - Partition appendPartition(String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18554) Fix false positive test ql.io.parquet.TestHiveSchemaConverter.testMap
[ https://issues.apache.org/jira/browse/HIVE-18554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343315#comment-16343315 ] Adam Szita commented on HIVE-18554: --- Thanks Peter! > Fix false positive test ql.io.parquet.TestHiveSchemaConverter.testMap > -- > > Key: HIVE-18554 > URL: https://issues.apache.org/jira/browse/HIVE-18554 > Project: Hive > Issue Type: Bug >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18554.0.patch > > > In test case {{testMap}} the AssertEquals check was returning a false > positive result, due to a Parquet bug: > Original types were not asserted in equals method, this has been fixed here: > [https://github.com/apache/parquet-mr/commit/878ebcd0bc2592fa9d5dda01117c07bc3c40bb33] > What this test would produce after the Parquet fix is this: > {code:java} > expected: optional group mapCol (MAP) { > repeated group map (MAP_KEY_VALUE) { >required binary key; >optional binary value; > } > } > } > > but was: optional group mapCol (MAP) { > repeated group map (MAP_KEY_VALUE) { >required binary key (UTF8); >optional binary value (UTF8); > } > } > } > >{code} > It affects testMapDecimal test case similarly. > Once we upgrade to a Parquet lib with this fix in place our test case will > produce failure too, hence I propose fixing it now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18554) Fix false positive test ql.io.parquet.TestHiveSchemaConverter.testMap
[ https://issues.apache.org/jira/browse/HIVE-18554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-18554: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks for your patch [~szita]! > Fix false positive test ql.io.parquet.TestHiveSchemaConverter.testMap > -- > > Key: HIVE-18554 > URL: https://issues.apache.org/jira/browse/HIVE-18554 > Project: Hive > Issue Type: Bug >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18554.0.patch > > > In test case {{testMap}} the AssertEquals check was returning a false > positive result, due to a Parquet bug: > Original types were not asserted in equals method, this has been fixed here: > [https://github.com/apache/parquet-mr/commit/878ebcd0bc2592fa9d5dda01117c07bc3c40bb33] > What this test would produce after the Parquet fix is this: > {code:java} > expected: optional group mapCol (MAP) { > repeated group map (MAP_KEY_VALUE) { >required binary key; >optional binary value; > } > } > } > > but was: optional group mapCol (MAP) { > repeated group map (MAP_KEY_VALUE) { >required binary key (UTF8); >optional binary value (UTF8); > } > } > } > >{code} > It affects testMapDecimal test case similarly. > Once we upgrade to a Parquet lib with this fix in place our test case will > produce failure too, hence I propose fixing it now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18554) Fix false positive test ql.io.parquet.TestHiveSchemaConverter.testMap
[ https://issues.apache.org/jira/browse/HIVE-18554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343301#comment-16343301 ] Adam Szita commented on HIVE-18554: --- Ptest and checkstyle failures are irrelevant, ready for commit. [~pvary] can you take a look please? > Fix false positive test ql.io.parquet.TestHiveSchemaConverter.testMap > -- > > Key: HIVE-18554 > URL: https://issues.apache.org/jira/browse/HIVE-18554 > Project: Hive > Issue Type: Bug >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Attachments: HIVE-18554.0.patch > > > In test case {{testMap}} the AssertEquals check was returning a false > positive result, due to a Parquet bug: > Original types were not asserted in equals method, this has been fixed here: > [https://github.com/apache/parquet-mr/commit/878ebcd0bc2592fa9d5dda01117c07bc3c40bb33] > What this test would produce after the Parquet fix is this: > {code:java} > expected: optional group mapCol (MAP) { > repeated group map (MAP_KEY_VALUE) { >required binary key; >optional binary value; > } > } > } > > but was: optional group mapCol (MAP) { > repeated group map (MAP_KEY_VALUE) { >required binary key (UTF8); >optional binary value (UTF8); > } > } > } > >{code} > It affects testMapDecimal test case similarly. > Once we upgrade to a Parquet lib with this fix in place our test case will > produce failure too, hence I propose fixing it now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18566) Create tests to cover adding partitions from PartitionSpec
[ https://issues.apache.org/jira/browse/HIVE-18566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora updated HIVE-18566: - Attachment: HIVE-18566.1.patch > Create tests to cover adding partitions from PartitionSpec > -- > > Key: HIVE-18566 > URL: https://issues.apache.org/jira/browse/HIVE-18566 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18566.1.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - int add_partitions_pspec(PartitionSpecProxy){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18566) Create tests to cover adding partitions from PartitionSpec
[ https://issues.apache.org/jira/browse/HIVE-18566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora updated HIVE-18566: - Status: Patch Available (was: Open) > Create tests to cover adding partitions from PartitionSpec > -- > > Key: HIVE-18566 > URL: https://issues.apache.org/jira/browse/HIVE-18566 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18566.1.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - int add_partitions_pspec(PartitionSpecProxy){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18566) Create tests to cover adding partitions from PartitionSpec
[ https://issues.apache.org/jira/browse/HIVE-18566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora updated HIVE-18566: - Component/s: Test > Create tests to cover adding partitions from PartitionSpec > -- > > Key: HIVE-18566 > URL: https://issues.apache.org/jira/browse/HIVE-18566 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18566) Create tests to cover adding partitions from PartitionSpec
[ https://issues.apache.org/jira/browse/HIVE-18566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora reassigned HIVE-18566: > Create tests to cover adding partitions from PartitionSpec > -- > > Key: HIVE-18566 > URL: https://issues.apache.org/jira/browse/HIVE-18566 > Project: Hive > Issue Type: Sub-task >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18566) Create tests to cover adding partitions from PartitionSpec
[ https://issues.apache.org/jira/browse/HIVE-18566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora updated HIVE-18566: - Description: The following methods of IMetaStoreClient are covered in this Jira: {code:java} - int add_partitions_pspec(PartitionSpecProxy){code} > Create tests to cover adding partitions from PartitionSpec > -- > > Key: HIVE-18566 > URL: https://issues.apache.org/jira/browse/HIVE-18566 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - int add_partitions_pspec(PartitionSpecProxy){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
[ https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343272#comment-16343272 ] Matt McCline commented on HIVE-18562: - [~teddy.choi] thank you for your review! > Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken > > > Key: HIVE-18562 > URL: https://issues.apache.org/jira/browse/HIVE-18562 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18562.01.patch > > > Altering a CHAR/VARCHAR column's maxLength to a shorter value does not > truncate values when vectorized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
[ https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343267#comment-16343267 ] Matt McCline commented on HIVE-18562: - Committed to master. > Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken > > > Key: HIVE-18562 > URL: https://issues.apache.org/jira/browse/HIVE-18562 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18562.01.patch > > > Altering a CHAR/VARCHAR column's maxLength to a shorter value does not > truncate values when vectorized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18562) Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken
[ https://issues.apache.org/jira/browse/HIVE-18562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18562: Resolution: Fixed Status: Resolved (was: Patch Available) > Vectorization: CHAR/VARCHAR conversion in VectorDeserializeRow is broken > > > Key: HIVE-18562 > URL: https://issues.apache.org/jira/browse/HIVE-18562 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0 > > Attachments: HIVE-18562.01.patch > > > Altering a CHAR/VARCHAR column's maxLength to a shorter value does not > truncate values when vectorized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
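The expected semantics the fix restores can be sketched in isolation. This is an illustration only, with a hypothetical helper name; the actual patch lives in VectorDeserializeRow, not in a standalone method:

```java
// Hedged sketch of CHAR/VARCHAR maxLength enforcement: after a column is
// altered to a shorter maxLength, the read path must truncate stored values
// to the new length. enforceMaxLength is a hypothetical stand-in for that
// enforcement, which the vectorized deserializer was skipping.
public class VarcharTruncateSketch {
    static String enforceMaxLength(String value, int maxLength) {
        return value.length() <= maxLength ? value : value.substring(0, maxLength);
    }

    public static void main(String[] args) {
        // Column originally VARCHAR(10), altered to VARCHAR(4): stored values
        // longer than 4 characters must come back truncated.
        System.out.println(enforceMaxLength("abcdefghij", 4)); // prints "abcd"
    }
}
```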
[jira] [Commented] (HIVE-18486) Create tests to cover add partition methods
[ https://issues.apache.org/jira/browse/HIVE-18486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343230#comment-16343230 ] Marta Kuczora commented on HIVE-18486: -- Thanks a lot [~pvary] for committing the patch. > Create tests to cover add partition methods > --- > > Key: HIVE-18486 > URL: https://issues.apache.org/jira/browse/HIVE-18486 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18486.1.patch, HIVE-18486.2.patch, > HIVE-18486.3.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition add_partition(Partition) > - int add_partitions(List) > - List add_partitions(List, boolean, boolean){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18498) Create tests to cover get and list index methods
[ https://issues.apache.org/jira/browse/HIVE-18498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343232#comment-16343232 ] Marta Kuczora commented on HIVE-18498: -- Thanks a lot [~pvary] for committing the patch. > Create tests to cover get and list index methods > > > Key: HIVE-18498 > URL: https://issues.apache.org/jira/browse/HIVE-18498 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18498.1.patch, HIVE-18498.2.patch, > HIVE-18498.3.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Index getIndex(String, String, String) > - List listIndexes(String, String, short) > - List listIndexNames(String, String, short){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18479) Create tests to cover dropPartition methods
[ https://issues.apache.org/jira/browse/HIVE-18479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343231#comment-16343231 ] Marta Kuczora commented on HIVE-18479: -- Thanks a lot [~pvary] for committing the patch. > Create tests to cover dropPartition methods > --- > > Key: HIVE-18479 > URL: https://issues.apache.org/jira/browse/HIVE-18479 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18479.1.patch, HIVE-18479.2.patch, > HIVE-18479.3.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code} > - boolean dropPartition(String, String, List, boolean) > - boolean dropPartition(String, String, List, PartitionDropOptions) > - boolean dropPartition(String, String, String, boolean){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18496) Create tests to cover add/alter/drop index methods
[ https://issues.apache.org/jira/browse/HIVE-18496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343233#comment-16343233 ] Marta Kuczora commented on HIVE-18496: -- Thanks a lot [~pvary] for committing the patch. > Create tests to cover add/alter/drop index methods > -- > > Key: HIVE-18496 > URL: https://issues.apache.org/jira/browse/HIVE-18496 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18496.1.patch, HIVE-18496.2.patch, > HIVE-18496.3.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - void createIndex(Index, Table) > - boolean dropIndex(String, String, String, boolean) > - void alter_index(String, String, String, Index){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18544) Create tests to cover appendPartition methods
[ https://issues.apache.org/jira/browse/HIVE-18544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343199#comment-16343199 ] Peter Vary commented on HIVE-18544: --- Fixed the review board link > Create tests to cover appendPartition methods > - > > Key: HIVE-18544 > URL: https://issues.apache.org/jira/browse/HIVE-18544 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18544.1.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition appendPartition(String, String, List) > - Partition appendPartition(String, String, String){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18486) Create tests to cover add partition methods
[ https://issues.apache.org/jira/browse/HIVE-18486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-18486: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks for your work [~kuczoram] > Create tests to cover add partition methods > --- > > Key: HIVE-18486 > URL: https://issues.apache.org/jira/browse/HIVE-18486 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18486.1.patch, HIVE-18486.2.patch, > HIVE-18486.3.patch > > > The following methods of IMetaStoreClient are covered in this Jira: > {code:java} > - Partition add_partition(Partition) > - int add_partitions(List) > - List add_partitions(List, boolean, boolean){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18511) Fix generated checkstyle errors
[ https://issues.apache.org/jira/browse/HIVE-18511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343178#comment-16343178 ] Adam Szita commented on HIVE-18511: --- +1 non-binding, I can see all checkstyle errors are now fixed > Fix generated checkstyle errors > --- > > Key: HIVE-18511 > URL: https://issues.apache.org/jira/browse/HIVE-18511 > Project: Hive > Issue Type: Sub-task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-18511.patch > > > HIVE-18510 identified that checkstyle was not running for test sources. > After running checkstyle, several errors were identified -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18484) Create tests to cover listPartition(s) methods
[ https://issues.apache.org/jira/browse/HIVE-18484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343162#comment-16343162 ] Adam Szita commented on HIVE-18484: --- Thanks for reviewing [~kuczoram] and [~pvary] (and also committing :) ) > Create tests to cover listPartition(s) methods > -- > > Key: HIVE-18484 > URL: https://issues.apache.org/jira/browse/HIVE-18484 > Project: Hive > Issue Type: Sub-task >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18484.0.patch, HIVE-18484.1.patch, > HIVE-18484.2.patch, HIVE-18484.3.patch > > > Methods of IMetaStoreClient covered in this task are: > {code:java} > listPartitions(String,String,short) > listPartitions(String,String,List(String),short) > listPartitionSpecs(String,String,int) > listPartitionsWithAuthInfo(String,String,short,String,List(String)) > listPartitionsWithAuthInfo(String,String,List(String),short,String,List(String)) > listPartitionsByFilter(String,String,String,short) > listPartitionSpecsByFilter(String,String,String,int) > listPartitionNames(String,String,short) > listPartitionNames(String,String,List(String),short) > listPartitionValues(PartitionValuesRequest){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18483) Create tests to cover getPartition(s) methods
[ https://issues.apache.org/jira/browse/HIVE-18483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343160#comment-16343160 ] Adam Szita commented on HIVE-18483: --- Thanks for reviewing [~kuczoram] and [~pvary] (and also committing :) ) > Create tests to cover getPartition(s) methods > - > > Key: HIVE-18483 > URL: https://issues.apache.org/jira/browse/HIVE-18483 > Project: Hive > Issue Type: Sub-task >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18483.0.patch, HIVE-18483.1.patch, > HIVE-18483.2.patch, HIVE-18483.3.patch, HIVE-18483.4.patch > > > Methods of IMetaStoreClient covered in this task are: > {code:java} > getPartition(String,String,String) > getPartition(String,String,List(String)) > getPartitionsByNames(String,String,List(String)) > getPartitionWithAuthInfo(String,String,List(String),String,List(String)){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18468) Create tests to cover alterPartition and renamePartition methods
[ https://issues.apache.org/jira/browse/HIVE-18468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343159#comment-16343159 ] Adam Szita commented on HIVE-18468: --- Thanks for reviewing [~kuczoram] and [~pvary] (and also committing :) ) > Create tests to cover alterPartition and renamePartition methods > > > Key: HIVE-18468 > URL: https://issues.apache.org/jira/browse/HIVE-18468 > Project: Hive > Issue Type: Sub-task >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18468.0.patch, HIVE-18468.1.patch, > HIVE-18468.2.patch, HIVE-18468.3.patch > > > Methods of IMetaStoreClient covered in this task are: > {code:java} > alter_partition(String,String,Partition) > alter_partition(String,String,Partition,EnvironmentContext) > alter_partitions(String,String,List(Partition)) > alter_partitions(String,String,List(Partition),EnvironmentContext) > renamePartition(String,String,List(String),Partition){code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18564) Add a mapper to make plan transformations more easily understandable
[ https://issues.apache.org/jira/browse/HIVE-18564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-18564: --- > Add a mapper to make plan transformations more easily understandable > > > Key: HIVE-18564 > URL: https://issues.apache.org/jira/browse/HIVE-18564 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > This started as a small helper class to enable plan-independent > mapping of runtime operator information, but in reality it is a bit different > and might have its own kinds of usages. > Goals were: > * connect plan pieces which are responsible for the same part; > currently I'm using it to connect RelNode, AST, Operator, RuntimeStats > * make it easy to attach new data > * make it easy to look up some related information > This concept also seems useful when writing tests, because it > enables the lookup of specific pieces like HiveFilter -- This message was sent by Atlassian JIRA (v7.6.3#76005)
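The goals listed in the description (link related plan pieces, attach data, look up a linked piece by type) can be sketched as a small grouping structure. This is a hedged sketch under stated assumptions, not the actual HIVE-18564 class; all names here are hypothetical:

```java
// Hypothetical sketch of the described mapper: link plan objects (RelNode,
// AST node, Operator, runtime stats) that describe the same plan piece into
// one group, then look up a linked element of a given type from any member.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PlanMapperSketch {
    private final List<Set<Object>> groups = new ArrayList<>();

    // Link a and b into the same group, merging groups if both already exist.
    public void link(Object a, Object b) {
        Set<Object> ga = groupOf(a);
        Set<Object> gb = groupOf(b);
        if (ga == null && gb == null) {
            Set<Object> g = new HashSet<>();
            g.add(a);
            g.add(b);
            groups.add(g);
        } else if (ga == null) {
            gb.add(a);
        } else if (gb == null) {
            ga.add(b);
        } else if (ga != gb) {
            ga.addAll(gb); // merge two previously separate groups
            groups.remove(gb);
        }
    }

    // From any member of a group, fetch a linked element of the wanted type
    // (e.g. the Operator linked to a HiveFilter RelNode), or null if absent.
    public <T> T lookup(Object member, Class<T> wanted) {
        Set<Object> g = groupOf(member);
        if (g != null) {
            for (Object o : g) {
                if (wanted.isInstance(o)) {
                    return wanted.cast(o);
                }
            }
        }
        return null;
    }

    private Set<Object> groupOf(Object o) {
        for (Set<Object> g : groups) {
            if (g.contains(o)) {
                return g;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        PlanMapperSketch m = new PlanMapperSketch();
        // Stand-ins for a RelNode and a runtime stat describing the same piece.
        m.link("HiveFilter@1", 42);
        System.out.println(m.lookup("HiveFilter@1", Integer.class)); // prints "42"
    }
}
```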
[jira] [Commented] (HIVE-15353) Metastore throws NPE if StorageDescriptor.cols is null
[ https://issues.apache.org/jira/browse/HIVE-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343057#comment-16343057 ] Hive QA commented on HIVE-15353: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12843060/HIVE-15353.4.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8906/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8906/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8906/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-01-29 08:23:25.253 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8906/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-01-29 08:23:25.256 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 1dd863a..20ce1c6 master -> origin/master + git reset --hard HEAD HEAD is now at 1dd863a HIVE-18524: Vectorization: Execution failure related to non-standard embedding of IfExprConditionalFilter inside VectorUDFAdaptor (Revert HIVE-17139) (Matt McCline) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 2 commits, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 20ce1c6 HIVE-18495: JUnit rule to enable Driver level testing (Zoltan Haindrich reviewed by Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-01-29 08:23:28.028 + rm -rf ../yetus + mkdir ../yetus + git gc + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8906/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java: does not exist in index error: metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java: does not exist in index error: src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java: does not exist in index error: src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java: does not exist in index error: java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java: does not exist in index error: java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java: does not exist in index The patch does not appear to apply with p0, p1, or p2 + exit 1 ' 
{noformat} This message is automatically generated. ATTACHMENT ID: 12843060 - PreCommit-HIVE-Build > Metastore throws NPE if StorageDescriptor.cols is null > -- > > Key: HIVE-15353 > URL: https://issues.apache.org/jira/browse/HIVE-15353 > Project: Hive > Issue Type: Bug >Affects Versions: 1.1.0, 2.2.0 >Reporter: Anthony Hsu >Assignee: Anthony Hsu >Priority: Major > Attachments: HIVE-15353.1.patch, HIVE-15353.2.patch, > HIVE-15353.3.patch, HIVE-15353.4.patch > > > When using the HiveMetaStoreClient API directly to talk to the metastore, you > get NullPointerExceptions when StorageDescriptor.cols is null in the > Table/Partition object in the following calls: > * create_table > * alter_table > * alter_partition > Calling add_partition with StorageDescriptor.cols set to null causes null to > be stored in the metastore database and subsequent calls to alter_partition > for that partition to fail with an NPE. > Null checks should be added to eliminate the NPEs in the metastore. -- This message was sent by Atlas
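The null checks the issue asks for amount to treating a null cols list as empty rather than dereferencing it. A minimal sketch, with a hypothetical helper name (the actual patch touches HiveAlterHandler and MetaStoreUtils):

```java
// Hedged sketch of the requested guard: a StorageDescriptor whose cols list
// is null should behave as an empty column list instead of causing an NPE
// in create_table / alter_table / alter_partition.
import java.util.Collections;
import java.util.List;

public class SdColsGuard {
    static <T> List<T> colsOrEmpty(List<T> cols) {
        return cols == null ? Collections.emptyList() : cols;
    }

    public static void main(String[] args) {
        // With the guard, null cols iterates as an empty list.
        System.out.println(colsOrEmpty(null).size()); // prints "0"
    }
}
```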
[jira] [Commented] (HIVE-18048) Support Struct type with vectorization for Parquet file
[ https://issues.apache.org/jira/browse/HIVE-18048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343054#comment-16343054 ] Hive QA commented on HIVE-18048: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12908102/HIVE-18048.005.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 12633 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175) org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[generatehfiles_require_family_path] (batchId=246) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=163) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=180) 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestGetPartitions.testGetPartitionWithAuthInfoNoDbName[Embedded] (batchId=207) org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8905/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8905/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8905/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 24 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12908102 - PreCommit-HIVE-Build > Support Struct type with vectorization for Parquet file > --- > > Key: HIVE-18048 > URL: https://issues.apache.org/jira/browse/HIVE-18048 > Project: Hive > Issue Type: Sub-task >Reporter: Colin Ma >Assignee: Colin Ma >Priority: Major > Attachments: HIVE-18048.001.patch, HIVE-18048.002.patch, > HIVE-18048.003.patch, HIVE-18048.004.patch, HIVE-18048.005.patch > > > Struct type is not supported in MapWork with vectorization, it should be > supported to improve the performance. > New UDF will be added to access the field of Struct. 
> Note: > * Filter operator won't be supported. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
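The description does not spell out the new struct-field UDF, so as an illustration only: the computation such a UDF performs is a lookup of a named field inside a struct value. The sketch below models the struct as a plain map; a real Hive UDF would operate on ObjectInspectors, and `getField` is a hypothetical name:

```java
// Illustration only: a struct value modeled as an ordered field->value map,
// and a hypothetical helper mimicking what a "get field of struct" UDF
// computes. Not the actual Hive implementation.
import java.util.LinkedHashMap;
import java.util.Map;

public class StructFieldSketch {
    static Object getField(Map<String, Object> struct, String fieldName) {
        if (!struct.containsKey(fieldName)) {
            throw new IllegalArgumentException("No such field: " + fieldName);
        }
        return struct.get(fieldName);
    }

    public static void main(String[] args) {
        Map<String, Object> s = new LinkedHashMap<>();
        s.put("city", "Austin");
        s.put("zip", 78701);
        System.out.println(getField(s, "zip")); // prints "78701"
    }
}
```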