[jira] [Updated] (HIVE-13657) Spark driver stderr logs should appear in hive client logs
[ https://issues.apache.org/jira/browse/HIVE-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mohit Sabharwal updated HIVE-13657:
-----------------------------------
    Status: Patch Available  (was: Open)

> Spark driver stderr logs should appear in hive client logs
> ----------------------------------------------------------
>
>                 Key: HIVE-13657
>                 URL: https://issues.apache.org/jira/browse/HIVE-13657
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Mohit Sabharwal
>            Assignee: Mohit Sabharwal
>         Attachments: HIVE-13657.patch
>
> Currently, spark driver exceptions are not getting logged in beeline.
> Instead, the user sees the not-so-useful:
> {code}
> ERROR : Failed to execute spark task, with exception
> 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
> {code}
> The user has to look at HS2 logs to discover the root cause:
> {code}
> 2015-04-01 11:33:16,048 INFO org.apache.hive.spark.client.SparkClientImpl:
> 15/04/01 11:33:16 WARN UserGroupInformation: PriviledgedActionException
> as:foo (auth:PROXY) via hive (auth:SIMPLE)
> cause:org.apache.hadoop.security.AccessControlException: Permission denied:
> user=foo, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
> ...
> {code}
> We should surface these critical errors in hive client.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13657) Spark driver stderr logs should appear in hive client logs
[ https://issues.apache.org/jira/browse/HIVE-13657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mohit Sabharwal updated HIVE-13657:
-----------------------------------
    Attachment: HIVE-13657.patch
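The general technique for surfacing a child process's stderr in the client, as the issue above requests, is to retain a bounded tail of the driver's stderr and append it to the error returned to the caller. The sketch below is NOT the HIVE-13657 patch; `StderrTail` is a hypothetical illustration of that pattern using only the JDK.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: keep the last N stderr lines of a child process so they can be
// attached to the client-facing error, instead of living only in HS2 logs.
public class StderrTail {
    private final Deque<String> tail = new ArrayDeque<>();
    private final int maxLines;

    public StderrTail(int maxLines) {
        this.maxLines = maxLines;
    }

    // Drain a reader (e.g. wrapping process.getErrorStream()),
    // retaining only the most recent maxLines lines.
    public void drain(Reader stderr) throws IOException {
        BufferedReader in = new BufferedReader(stderr);
        String line;
        while ((line = in.readLine()) != null) {
            if (tail.size() == maxLines) {
                tail.removeFirst();
            }
            tail.addLast(line);
        }
    }

    // Build a client-facing message that includes the retained stderr tail.
    public String describeFailure(String summary) {
        StringBuilder sb = new StringBuilder(summary);
        for (String line : tail) {
            sb.append('\n').append(line);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        StderrTail tail = new StderrTail(3);
        tail.drain(new StringReader("WARN UserGroupInformation: PriviledgedActionException\n"
                + "cause:AccessControlException: Permission denied"));
        System.out.println(tail.describeFailure("Failed to create spark client."));
    }
}
```

In a real client, `drain` would run on a dedicated thread for the lifetime of the driver process; the tail is only consulted when the driver fails to start.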
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vaibhav Gumashta updated HIVE-13390:
------------------------------------
    Attachment: HIVE-13390.branch-1.2.patch

Patch for branch-1.2

> HiveServer2: Add more test to ZK service discovery using MiniHS2
> ----------------------------------------------------------------
>
>                 Key: HIVE-13390
>                 URL: https://issues.apache.org/jira/browse/HIVE-13390
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, JDBC
>    Affects Versions: 1.2.1, 2.0.0
>            Reporter: Vaibhav Gumashta
>            Assignee: Vaibhav Gumashta
>             Fix For: 2.0.1
>
>         Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, HIVE-13390.2.patch, HIVE-13390.3.patch, HIVE-13390.branch-1.2.patch, HIVE-13390.branch-1.patch, keystore.jks, keystore_exampledotcom.jks, truststore.jks
[jira] [Commented] (HIVE-11848) tables in subqueries don't get locked
[ https://issues.apache.org/jira/browse/HIVE-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268115#comment-15268115 ]

Eugene Koifman commented on HIVE-11848:
---------------------------------------

[~wzheng] could you review

> tables in subqueries don't get locked
> -------------------------------------
>
>                 Key: HIVE-11848
>                 URL: https://issues.apache.org/jira/browse/HIVE-11848
>             Project: Hive
>          Issue Type: Bug
>          Components: Locking, Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>            Priority: Critical
>         Attachments: HIVE-11848.patch
>
> Consider
> {noformat}
> update acidTbl set b=19 where acidTbl.b in(select I.b from nonAcidOrcTbl I where I.a = 3)
> {noformat}
> nonAcidOrcTbl doesn't get locked at all. (SHARED_WRITE is taken on acidTbl.)
> The same holds for __delete__ with a subquery.
> This is because the ReadEntity for nonAcidOrcTbl is skipped by
> {noformat}
> for (ReadEntity input : plan.getInputs()) {
>   if (!input.needsLock() || input.isUpdateOrDelete()) {
>     // We don't want to acquire readlocks during update or delete as we'll be acquiring write
>     // locks instead.
>     continue;
>   }
> {noformat}
> Whatever sets the isUpdateOrDelete() flag doesn't pay attention to whether the table is actually written to.
> HIVE-10150 was a similar issue, abstractly.
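The skip condition quoted in the description drops the read lock for every entity flagged `isUpdateOrDelete()`, even a table that is only read inside the subquery. A minimal sketch of the corrected decision (hypothetical `LockPredicate` helper, not the attached patch) would also require that the entity be the actual write target before skipping:

```java
// Hypothetical predicate sketch: skip the read lock only when the entity is
// part of an update/delete AND is the table actually being written,
// since that table will receive a write lock instead.
public class LockPredicate {
    public static boolean needsReadLock(boolean needsLock,
                                        boolean isUpdateOrDelete,
                                        boolean isWriteTarget) {
        if (!needsLock) {
            return false;
        }
        // The buggy version skipped on isUpdateOrDelete alone, so a table
        // read only in a subquery (nonAcidOrcTbl) got no lock at all.
        return !(isUpdateOrDelete && isWriteTarget);
    }

    public static void main(String[] args) {
        // Subquery source in an UPDATE: still needs its read lock.
        System.out.println(needsReadLock(true, true, false));
        // The updated table itself: no read lock, a write lock is taken.
        System.out.println(needsReadLock(true, true, true));
    }
}
```

The real fix operates on `ReadEntity`/`WriteEntity` objects in the plan; this boolean form only isolates the decision logic.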
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vaibhav Gumashta updated HIVE-13390:
------------------------------------
    Attachment: HIVE-13390.branch-1.patch

Attaching patch for branch-1
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vaibhav Gumashta updated HIVE-13390:
------------------------------------
    Target Version/s: 1.3.0, 1.2.2, 2.0.1  (was: 1.2.2, 2.0.1)
[jira] [Updated] (HIVE-13661) [Refactor] Move common FS operations out of shim layer
[ https://issues.apache.org/jira/browse/HIVE-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-13661:
------------------------------------
        Resolution: Fixed
    Fix Version/s: 2.1.0
           Status: Resolved  (was: Patch Available)

Pushed to master.

> [Refactor] Move common FS operations out of shim layer
> ------------------------------------------------------
>
>                 Key: HIVE-13661
>                 URL: https://issues.apache.org/jira/browse/HIVE-13661
>             Project: Hive
>          Issue Type: Improvement
>          Components: Shims
>    Affects Versions: 1.2.0, 2.0.0
>            Reporter: Ashutosh Chauhan
>            Assignee: Ashutosh Chauhan
>             Fix For: 2.1.0
>
>         Attachments: HIVE-13361.1.patch
>
> Avoid overhead of extra function calls.
[jira] [Updated] (HIVE-13632) Hive failing on insert empty array into parquet table
[ https://issues.apache.org/jira/browse/HIVE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-13632:
--------------------------------
    Attachment: HIVE-13632.2.patch

Re-attach patch 2 with fixes to size and length for empty map and empty list.

> Hive failing on insert empty array into parquet table
> -----------------------------------------------------
>
>                 Key: HIVE-13632
>                 URL: https://issues.apache.org/jira/browse/HIVE-13632
>             Project: Hive
>          Issue Type: Bug
>          Components: Serializers/Deserializers
>    Affects Versions: 1.1.0
>            Reporter: Yongzhi Chen
>            Assignee: Yongzhi Chen
>         Attachments: HIVE-13632.1.patch, HIVE-13632.2.patch
>
> The insert fails with the following stack:
> {noformat}
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
>   at parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:271)
>   at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$ListDataWriter.write(DataWritableWriter.java:271)
>   at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:199)
>   at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.write(DataWritableWriter.java:215)
>   at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:88)
>   at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
>   at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
>   at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:116)
>   at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
>   at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
>   at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
>   at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
>   at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:697)
> {noformat}
> Reproduce:
> {noformat}
> create table test_small (
>   key string,
>   arrayValues array<string>)
> stored as parquet;
> insert into table test_small select 'abcd', array() from src limit 1;
> {noformat}
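Parquet's record consumer rejects a `startField`/`endField` pair with no values in between, which is why inserting `array()` fails above. The sketch below is NOT the HIVE-13632 patch; `FieldConsumer` and `ListWriterSketch` are hypothetical stand-ins showing the guard the exception message demands: an empty list must cause the field to be omitted entirely, not started and immediately ended.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical consumer mirroring Parquet's rule that a started field
// must contain at least one value ("empty fields are illegal").
interface FieldConsumer {
    void startField(String name);
    void addValue(Object v);
    void endField(String name);
}

public class ListWriterSketch {
    // Guard sketch: omit the field entirely for a null/empty list instead
    // of emitting an illegal empty startField/endField pair.
    public static void writeList(FieldConsumer out, String name, List<?> values) {
        if (values == null || values.isEmpty()) {
            return; // field omitted completely, as Parquet requires
        }
        out.startField(name);
        for (Object v : values) {
            out.addValue(v);
        }
        out.endField(name);
    }

    public static void main(String[] args) {
        FieldConsumer printer = new FieldConsumer() {
            public void startField(String n) { System.out.println("start " + n); }
            public void addValue(Object v) { System.out.println("value " + v); }
            public void endField(String n) { System.out.println("end " + n); }
        };
        writeList(printer, "arrayValues", Collections.emptyList()); // prints nothing
        writeList(printer, "arrayValues", Arrays.asList("a", "b"));
    }
}
```

The actual fix lives in `DataWritableWriter`'s list/map writers, which must apply this check per element group.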
[jira] [Updated] (HIVE-13632) Hive failing on insert empty array into parquet table
[ https://issues.apache.org/jira/browse/HIVE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongzhi Chen updated HIVE-13632:
--------------------------------
    Attachment: (was: HIVE-13632.2.patch)
[jira] [Updated] (HIVE-13598) Describe extended table should show the primary keys/foreign keys associated with the table
[ https://issues.apache.org/jira/browse/HIVE-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13598:
-----------------------------------------------------
    Status: Patch Available  (was: Open)

> Describe extended table should show the primary keys/foreign keys associated with the table
> -------------------------------------------------------------------------------------------
>
>                 Key: HIVE-13598
>                 URL: https://issues.apache.org/jira/browse/HIVE-13598
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Hari Sankar Sivarama Subramaniyan
>            Assignee: Hari Sankar Sivarama Subramaniyan
>         Attachments: HIVE-13598.1.patch, HIVE-13598.2.patch, HIVE-13598.3.patch, HIVE-13598.4.patch
>
> After HIVE-13290 is committed, we need to show the constraints as part of the table description when the extended keyword is used. Currently, the constraints are not shown as part of the table description, since a constraint is a separate entity.
> The purpose of this jira is to modify Hive.describeTable() to enable the user to view the constraints associated with a table when the user runs "describe extended table".
[jira] [Commented] (HIVE-12634) Add command to kill an ACID transaction
[ https://issues.apache.org/jira/browse/HIVE-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268031#comment-15268031 ]

Hive QA commented on HIVE-12634:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12801636/HIVE-12634.6.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 67 failed/errored test(s), 10009 tests executed

*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constantPropagateForSubQuery
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefault
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefaultEmptyString
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfOverridden
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfUnknownPreperty
org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithValidPartVal
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure
[jira] [Updated] (HIVE-13598) Describe extended table should show the primary keys/foreign keys associated with the table
[ https://issues.apache.org/jira/browse/HIVE-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13598:
-----------------------------------------------------
    Status: Open  (was: Patch Available)
[jira] [Updated] (HIVE-13598) Describe extended table should show the primary keys/foreign keys associated with the table
[ https://issues.apache.org/jira/browse/HIVE-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13598:
-----------------------------------------------------
    Attachment: HIVE-13598.4.patch
[jira] [Updated] (HIVE-11848) tables in subqueries don't get locked
[ https://issues.apache.org/jira/browse/HIVE-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman updated HIVE-11848:
----------------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HIVE-11848) tables in subqueries don't get locked
[ https://issues.apache.org/jira/browse/HIVE-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman updated HIVE-11848:
----------------------------------
    Attachment: HIVE-11848.patch
[jira] [Updated] (HIVE-13620) Merge llap branch work to master
[ https://issues.apache.org/jira/browse/HIVE-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Dere updated HIVE-13620:
------------------------------
    Attachment: HIVE-13620.5.patch

Merged master into llap again and created patch v5 with changes based on [~sseth]'s comments.

> Merge llap branch work to master
> --------------------------------
>
>                 Key: HIVE-13620
>                 URL: https://issues.apache.org/jira/browse/HIVE-13620
>             Project: Hive
>          Issue Type: Sub-task
>          Components: llap
>            Reporter: Jason Dere
>            Assignee: Jason Dere
>         Attachments: HIVE-13620.1.patch, HIVE-13620.2.patch, HIVE-13620.3.patch, HIVE-13620.4.patch, HIVE-13620.5.patch, llap_master_diff.txt
>
> Would like to try to merge the llap branch work for HIVE-12991 into the master branch.
[jira] [Commented] (HIVE-11550) ACID queries pollute HiveConf
[ https://issues.apache.org/jira/browse/HIVE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267892#comment-15267892 ]

Eugene Koifman commented on HIVE-11550:
---------------------------------------

I will have to modify this due to HIVE-13646

> ACID queries pollute HiveConf
> -----------------------------
>
>                 Key: HIVE-11550
>                 URL: https://issues.apache.org/jira/browse/HIVE-11550
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>         Attachments: HIVE-11550.patch
>
> HiveConf is a SessionState-level object. Some ACID-related logic makes changes to it that are meant to be per query but become per SessionState.
> See SemanticAnalyzer.checkAcidConstraints().
> Also note HiveConf.setVar(conf, HiveConf.ConfVars.DYNAMICPARTITIONINGMODE, "nonstrict"); in UpdateDeleteSemanticAnalyzer.
> [~alangates], do you know of other cases, or of ideas on how to deal with this differently?
> _SortedDynPartitionOptimizer.process()_ is the place to put the logic that does _conf.setBoolVar(ConfVars.HIVEOPTSORTDYNAMICPARTITION, false);_ on a per-query basis.
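The pollution described above comes from mutating the shared session-level configuration in place, so a per-query override outlives the query. A common remedy, sketched below with `java.util.Properties` standing in for HiveConf (hypothetical `QueryScopedConf` helper, not the attached patch), is to apply query-specific overrides to a copy of the session conf:

```java
import java.util.Properties;

// Sketch: scope a per-query override to a copy of the session conf so a
// flag flipped for one statement reverts once the statement finishes.
public class QueryScopedConf {
    // Return a fresh conf seeded from the session conf; mutations to the
    // result never touch the session-level object.
    public static Properties forQuery(Properties sessionConf) {
        Properties queryConf = new Properties();
        queryConf.putAll(sessionConf); // copy entries, don't alias
        return queryConf;
    }

    public static void main(String[] args) {
        Properties session = new Properties();
        session.setProperty("hive.exec.dynamic.partition.mode", "strict");

        Properties query = forQuery(session);
        query.setProperty("hive.exec.dynamic.partition.mode", "nonstrict");

        // The session-level value is untouched after the query-local change.
        System.out.println(session.getProperty("hive.exec.dynamic.partition.mode"));
    }
}
```

With HiveConf itself, the analogous move is constructing a per-query `new HiveConf(sessionConf)` copy rather than calling setters on the session instance.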
[jira] [Updated] (HIVE-11550) ACID queries pollute HiveConf
[ https://issues.apache.org/jira/browse/HIVE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman updated HIVE-11550:
----------------------------------
    Status: Open  (was: Patch Available)
[jira] [Commented] (HIVE-13458) Heartbeater doesn't fail query when heartbeat fails
[ https://issues.apache.org/jira/browse/HIVE-13458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267888#comment-15267888 ]

Hive QA commented on HIVE-13458:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12801635/HIVE-13458.7.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 76 failed/errored test(s), 9940 tests executed

*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniTezCliDriver-order_null.q-vector_acid3.q-orc_merge10.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_distinct_2.q-tez_joins_explain.q-cte_mat_1.q-and-12-more - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_grouping_sets.q-mapjoin_mapjoin.q-update_all_partitioned.q-and-12-more - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefault
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefaultEmptyString
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfOverridden
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfUnknownPreperty
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithValidPartVal
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
[jira] [Commented] (HIVE-11880) filter bug of UNION ALL when hive.ppd.remove.duplicatefilters=true and filter condition is type incompatible column
[ https://issues.apache.org/jira/browse/HIVE-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267887#comment-15267887 ] WangMeng commented on HIVE-11880: - Thanks [~aihuaxu]. > filter bug of UNION ALL when hive.ppd.remove.duplicatefilters=true and > filter condition is type incompatible column > - > > Key: HIVE-11880 > URL: https://issues.apache.org/jira/browse/HIVE-11880 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 1.2.1 >Reporter: WangMeng >Assignee: WangMeng > Attachments: HIVE-11880.01.patch, HIVE-11880.02.patch, > HIVE-11880.03.patch, HIVE-11880.04.patch > > >For UNION ALL, when a branch of the union supplies a constant column (such as 0L, > BIGINT type) and its corresponding column has an incompatible type (such as INT > type), a query with a filter condition on the type-incompatible column of this UNION ALL > will cause an IndexOutOfBoundsException. > For example, with the TPC-H table "orders": > the type of 'orders'.'o_custkey' is normally INT, while the type of the > corresponding constant column is BIGINT ( `0L AS `o_custkey` ). > The following query (with a filter on the type-incompatible column 'o_custkey') > fails with java.lang.IndexOutOfBoundsException: > {code} > set hive.cbo.enable=false; > set hive.ppd.remove.duplicatefilters=true; > CREATE TABLE `orders`( > `o_orderkey` int, > `o_custkey` int, > `o_orderstatus` string, > `o_totalprice` double, > `o_orderdate` string, > `o_orderpriority` string, > `o_clerk` string, > `o_shippriority` int, > `o_comment` string); > SELECT o_orderkey > FROM ( > SELECT `o_orderkey` , > `o_custkey` > FROM `orders` > UNION ALL > SELECT `o_orderkey`, > 0L AS `o_custkey` > FROM `orders`) `oo` > WHERE o_custkey<10; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
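[Editor's sketch, not from the attached patches: casting the literal to the column's type makes both UNION ALL branches agree on o_custkey's type, which sidesteps the type mismatch the report describes. Whether this avoids the exception on a given Hive version is an assumption; the query is illustrative only.]

```sql
set hive.cbo.enable=false;
set hive.ppd.remove.duplicatefilters=true;

-- Cast the BIGINT literal to INT so both branches of the union
-- produce o_custkey with the same type as orders.o_custkey.
SELECT o_orderkey
FROM (
  SELECT `o_orderkey`, `o_custkey`
  FROM `orders`
  UNION ALL
  SELECT `o_orderkey`, CAST(0 AS INT) AS `o_custkey`
  FROM `orders`) `oo`
WHERE o_custkey < 10;
```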
[jira] [Reopened] (HIVE-13674) usingTezAm field not required in LLAP SubmitWorkRequestProto
[ https://issues.apache.org/jira/browse/HIVE-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reopened HIVE-13674: --- Reverted in llap branch, will defer until later > usingTezAm field not required in LLAP SubmitWorkRequestProto > > > Key: HIVE-13674 > URL: https://issues.apache.org/jira/browse/HIVE-13674 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: llap > > Attachments: HIVE-13674.1.patch > > > From [~sseth] during review of HIVE-13620, the usingTezAm field is not needed > in SubmitWorkRequestProto. > Also, SendEventsRequestProto/SendEventsResponseProto are not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13568) Add UDFs to support column-masking
[ https://issues.apache.org/jira/browse/HIVE-13568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13568: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, [~madhan.neethiraj] > Add UDFs to support column-masking > -- > > Key: HIVE-13568 > URL: https://issues.apache.org/jira/browse/HIVE-13568 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Madhan Neethiraj >Assignee: Madhan Neethiraj > Fix For: 2.1.0 > > Attachments: HIVE-13568.1.patch, HIVE-13568.1.patch, > HIVE-13568.2.patch, HIVE-13568.3.patch, HIVE-13568.4.patch > > > HIVE-13125 added support to provide column-masking and row-filtering during > select via HiveAuthorizer interface. This JIRA is track addition of UDFs that > can be used by HiveAuthorizer implementations to mask column values. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13674) usingTezAm field not required in LLAP SubmitWorkRequestProto
[ https://issues.apache.org/jira/browse/HIVE-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267856#comment-15267856 ] Jason Dere commented on HIVE-13674: --- ok .. would you prefer to revert this branch change and defer until after the merge and HIVE-13442 is in? > usingTezAm field not required in LLAP SubmitWorkRequestProto > > > Key: HIVE-13674 > URL: https://issues.apache.org/jira/browse/HIVE-13674 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: llap > > Attachments: HIVE-13674.1.patch > > > From [~sseth] during review of HIVE-13620, the usingTezAm field is not needed > in SubmitWorkRequestProto. > Also, SendEventsRequestProto/SendEventsResponseProto are not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits
[ https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267853#comment-15267853 ] Sergey Shelukhin commented on HIVE-13675: - I have some old code changes based on HIVE-13444. Will wait for merge > LLAP: add HMAC signatures to LLAPIF splits > -- > > Key: HIVE-13675 > URL: https://issues.apache.org/jira/browse/HIVE-13675 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13444) LLAP: add HMAC signatures to LLAP; verify them on LLAP side
[ https://issues.apache.org/jira/browse/HIVE-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13444: Attachment: HIVE-13444.WIP.patch backup of the patch on top of HIVE-13442 > LLAP: add HMAC signatures to LLAP; verify them on LLAP side > --- > > Key: HIVE-13444 > URL: https://issues.apache.org/jira/browse/HIVE-13444 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13444.WIP.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13674) usingTezAm field not required in LLAP SubmitWorkRequestProto
[ https://issues.apache.org/jira/browse/HIVE-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267847#comment-15267847 ] Sergey Shelukhin commented on HIVE-13674: - HIVE-13442 changes the API a lot... is it possible to avoid non-essential changes right now > usingTezAm field not required in LLAP SubmitWorkRequestProto > > > Key: HIVE-13674 > URL: https://issues.apache.org/jira/browse/HIVE-13674 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: llap > > Attachments: HIVE-13674.1.patch > > > From [~sseth] during review of HIVE-13620, the usingTezAm field is not needed > in SubmitWorkRequestProto. > Also, SendEventsRequestProto/SendEventsResponseProto are not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13674) usingTezAm field not required in LLAP SubmitWorkRequestProto
[ https://issues.apache.org/jira/browse/HIVE-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere resolved HIVE-13674. --- Resolution: Fixed Assignee: Jason Dere Fix Version/s: llap committed to llap branch > usingTezAm field not required in LLAP SubmitWorkRequestProto > > > Key: HIVE-13674 > URL: https://issues.apache.org/jira/browse/HIVE-13674 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: llap > > Attachments: HIVE-13674.1.patch > > > From [~sseth] during review of HIVE-13620, the usingTezAm field is not needed > in SubmitWorkRequestProto. > Also, SendEventsRequestProto/SendEventsResponseProto are not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13674) usingTezAm field not required in LLAP SubmitWorkRequestProto
[ https://issues.apache.org/jira/browse/HIVE-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-13674: -- Attachment: HIVE-13674.1.patch > usingTezAm field not required in LLAP SubmitWorkRequestProto > > > Key: HIVE-13674 > URL: https://issues.apache.org/jira/browse/HIVE-13674 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere > Attachments: HIVE-13674.1.patch > > > From [~sseth] during review of HIVE-13620, the usingTezAm field is not needed > in SubmitWorkRequestProto. > Also, SendEventsRequestProto/SendEventsResponseProto are not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13235) Insert from select generates incorrect result when hive.optimize.constant.propagation is on
[ https://issues.apache.org/jira/browse/HIVE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267830#comment-15267830 ] Pengcheng Xiong commented on HIVE-13235: [~ashutoshc], thanks for your comments. I totally agree with you. I just briefly reviewed [~aihuaxu]'s patch and I think the main difference is that his patch improves the tableAlias/colAlias matching while my patch completely drops the tableAlias/colAlias matching method. > Insert from select generates incorrect result when > hive.optimize.constant.propagation is on > --- > > Key: HIVE-13235 > URL: https://issues.apache.org/jira/browse/HIVE-13235 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13235.1.patch, HIVE-13235.2.patch, > HIVE-13235.3.patch, HIVE-13235.4.patch > > > The following query returns an incorrect result when constant optimization is > turned on. The subquery happens to have an alias p1 that is the same as the > input partition column name, and the constant optimizer incorrectly folds the > column to the constant. > When the constant optimizer is turned off, we get the correct result. > {noformat} > set hive.cbo.enable=false; > set hive.optimize.constant.propagation = true; > create table t1(c1 string, c2 double) partitioned by (p1 string, p2 string); > create table t2(p1 double, c2 string); > insert into table t1 partition(p1='40', p2='p2') values('c1', 0.0); > INSERT OVERWRITE TABLE t2 select if((c2 = 0.0), c2, '0') as p1, 2 as p2 from > t1 where c1 = 'c1' and p1 = '40'; > select * from t2; > 40 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
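[Editor's note: the report states that the incorrect result disappears when the constant optimizer is off, so until a patch lands the workaround is a session-level toggle. A minimal sketch, using the report's own tables:]

```sql
-- Reported workaround: disable constant folding for the affected query so
-- the subquery alias p1 is not folded to the partition constant '40'.
set hive.optimize.constant.propagation=false;

INSERT OVERWRITE TABLE t2
select if((c2 = 0.0), c2, '0') as p1, 2 as p2
from t1
where c1 = 'c1' and p1 = '40';
```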
[jira] [Updated] (HIVE-13675) LLAP: add HMAC signatures to LLAPIF splits
[ https://issues.apache.org/jira/browse/HIVE-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13675: Assignee: (was: Sergey Shelukhin) > LLAP: add HMAC signatures to LLAPIF splits > -- > > Key: HIVE-13675 > URL: https://issues.apache.org/jira/browse/HIVE-13675 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13444) LLAP: add HMAC signatures to LLAP; verify them on LLAP side
[ https://issues.apache.org/jira/browse/HIVE-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13444: Summary: LLAP: add HMAC signatures to LLAP; verify them on LLAP side (was: LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side) > LLAP: add HMAC signatures to LLAP; verify them on LLAP side > --- > > Key: HIVE-13444 > URL: https://issues.apache.org/jira/browse/HIVE-13444 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13354) Add ability to specify Compaction options per table and per request
[ https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13354: - Attachment: HIVE-13354.1.withoutSchemaChange.patch > Add ability to specify Compaction options per table and per request > --- > > Key: HIVE-13354 > URL: https://issues.apache.org/jira/browse/HIVE-13354 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.3.0, 2.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Labels: TODOC2.1 > Attachments: HIVE-13354.1.withoutSchemaChange.patch > > > Currently there are a few options that determine when automatic compaction is > triggered. They are specified once for the warehouse. > This doesn't make sense - some tables may be more important and need to be > compacted more often. > We should allow specifying these on a per-table basis. > Also, compaction is an MR job launched from within the metastore. There is > currently no way to control job parameters (like memory, for example) except > to specify them in hive-site.xml for the metastore, which means they are site-wide. > We should add a way to specify these per table (perhaps even per compaction if > launched via ALTER TABLE) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13354) Add ability to specify Compaction options per table and per request
[ https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13354: - Attachment: (was: HIVE-13354.1.withoutSchemaChange.patch) > Add ability to specify Compaction options per table and per request > --- > > Key: HIVE-13354 > URL: https://issues.apache.org/jira/browse/HIVE-13354 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.3.0, 2.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Labels: TODOC2.1 > Attachments: HIVE-13354.1.withoutSchemaChange.patch > > > Currently there are a few options that determine when automatic compaction is > triggered. They are specified once for the warehouse. > This doesn't make sense - some tables may be more important and need to be > compacted more often. > We should allow specifying these on a per-table basis. > Also, compaction is an MR job launched from within the metastore. There is > currently no way to control job parameters (like memory, for example) except > to specify them in hive-site.xml for the metastore, which means they are site-wide. > We should add a way to specify these per table (perhaps even per compaction if > launched via ALTER TABLE) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13235) Insert from select generates incorrect result when hive.optimize.constant.propagation is on
[ https://issues.apache.org/jira/browse/HIVE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267815#comment-15267815 ] Ashutosh Chauhan commented on HIVE-13235: - Thanks [~pxiong] for testing this out. So, it seems we only need one patch to solve these two problems. I haven't looked at either patch yet, but it seems we can commit either of these. [~aihuaxu] What do you think? > Insert from select generates incorrect result when > hive.optimize.constant.propagation is on > --- > > Key: HIVE-13235 > URL: https://issues.apache.org/jira/browse/HIVE-13235 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13235.1.patch, HIVE-13235.2.patch, > HIVE-13235.3.patch, HIVE-13235.4.patch > > > The following query returns an incorrect result when constant optimization is > turned on. The subquery happens to have an alias p1 that is the same as the > input partition column name, and the constant optimizer incorrectly folds the > column to the constant. > When the constant optimizer is turned off, we get the correct result. > {noformat} > set hive.cbo.enable=false; > set hive.optimize.constant.propagation = true; > create table t1(c1 string, c2 double) partitioned by (p1 string, p2 string); > create table t2(p1 double, c2 string); > insert into table t1 partition(p1='40', p2='p2') values('c1', 0.0); > INSERT OVERWRITE TABLE t2 select if((c2 = 0.0), c2, '0') as p1, 2 as p2 from > t1 where c1 = 'c1' and p1 = '40'; > select * from t2; > 40 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11417) Create shims for the row by row read path that is backed by VectorizedRowBatch
[ https://issues.apache.org/jira/browse/HIVE-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267802#comment-15267802 ] Sergey Shelukhin commented on HIVE-11417: - Looks like some of the test failures are related > Create shims for the row by row read path that is backed by VectorizedRowBatch > -- > > Key: HIVE-11417 > URL: https://issues.apache.org/jira/browse/HIVE-11417 > Project: Hive > Issue Type: Sub-task >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Fix For: 2.1.0 > > Attachments: HIVE-11417.patch, HIVE-11417.patch, HIVE-11417.patch, > HIVE-11417.patch > > > I'd like to make the default path for reading and writing ORC files to be > vectorized. To ensure that Hive can still read row by row, we'll need shims > to support the old API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13235) Insert from select generates incorrect result when hive.optimize.constant.propagation is on
[ https://issues.apache.org/jira/browse/HIVE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267800#comment-15267800 ] Pengcheng Xiong commented on HIVE-13235: [~ashutoshc], I just checked the problem that [~aihuaxu] mentioned in this JIRA. It seems quite related to HIVE-13602: I also tested the problem reported here, and it disappears with the patch in HIVE-13602. > Insert from select generates incorrect result when > hive.optimize.constant.propagation is on > --- > > Key: HIVE-13235 > URL: https://issues.apache.org/jira/browse/HIVE-13235 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13235.1.patch, HIVE-13235.2.patch, > HIVE-13235.3.patch, HIVE-13235.4.patch > > > The following query returns an incorrect result when constant optimization is > turned on. The subquery happens to have an alias p1 that is the same as the > input partition column name, and the constant optimizer incorrectly folds the > column to the constant. > When the constant optimizer is turned off, we get the correct result. > {noformat} > set hive.cbo.enable=false; > set hive.optimize.constant.propagation = true; > create table t1(c1 string, c2 double) partitioned by (p1 string, p2 string); > create table t2(p1 double, c2 string); > insert into table t1 partition(p1='40', p2='p2') values('c1', 0.0); > INSERT OVERWRITE TABLE t2 select if((c2 = 0.0), c2, '0') as p1, 2 as p2 from > t1 where c1 = 'c1' and p1 = '40'; > select * from t2; > 40 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13354) Add ability to specify Compaction options per table and per request
[ https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Zheng updated HIVE-13354: - Attachment: HIVE-13354.1.withoutSchemaChange.patch Uploaded patch 1, without schema changes in the upgrade scripts. > Add ability to specify Compaction options per table and per request > --- > > Key: HIVE-13354 > URL: https://issues.apache.org/jira/browse/HIVE-13354 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.3.0, 2.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Labels: TODOC2.1 > Attachments: HIVE-13354.1.withoutSchemaChange.patch > > > Currently there are a few options that determine when automatic compaction is > triggered. They are specified once for the warehouse. > This doesn't make sense - some tables may be more important and need to be > compacted more often. > We should allow specifying these on a per-table basis. > Also, compaction is an MR job launched from within the metastore. There is > currently no way to control job parameters (like memory, for example) except > to specify them in hive-site.xml for the metastore, which means they are site-wide. > We should add a way to specify these per table (perhaps even per compaction if > launched via ALTER TABLE) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13354) Add ability to specify Compaction options per table and per request
[ https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267792#comment-15267792 ] Wei Zheng commented on HIVE-13354: -- New usages after this improvement: - Allow new tblproperties on DDL. - Specify compactor MR job properties, e.g. CREATE TABLE t1 ... TBLPROPERTIES ('compactor.mapreduce.map.memory.mb'='1024'); - Specify compactor thresholds for triggering compaction (currently, hive.compactor.delta.num.threshold and hive.compactor.delta.pct.threshold), e.g. CREATE TABLE t1 ... TBLPROPERTIES ('compactorthreshold.hive.compactor.delta.num.threshold'='5'); - Allow tblproperties on ALTER TABLE .. COMPACT. - Specify compactor MR job properties or other Hive properties, e.g. ALTER TABLE t1 ... COMPACT ... WITH OVERWRITE TBLPROPERTIES ('compactor.mapreduce.map.memory.mb'='1024', 'tblprops.orc.compress.size'='8192'); > Add ability to specify Compaction options per table and per request > --- > > Key: HIVE-13354 > URL: https://issues.apache.org/jira/browse/HIVE-13354 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.3.0, 2.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Labels: TODOC2.1 > > Currently there are a few options that determine when automatic compaction is > triggered. They are specified once for the warehouse. > This doesn't make sense - some tables may be more important and need to be > compacted more often. > We should allow specifying these on a per-table basis. > Also, compaction is an MR job launched from within the metastore. There is > currently no way to control job parameters (like memory, for example) except > to specify them in hive-site.xml for the metastore, which means they are site-wide. > We should add a way to specify these per table (perhaps even per compaction if > launched via ALTER TABLE) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
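[Editor's sketch restating the examples in the comment above as complete statements. The table name, column list, bucketing clause, and 'transactional'='true' property are illustrative placeholders, not taken from the patch:]

```sql
-- Per-table compactor MR job property plus a compaction-trigger threshold,
-- declared at table creation time (elided parts of the comment's examples
-- are filled with hypothetical DDL).
CREATE TABLE t1 (a INT, b STRING)
CLUSTERED BY (a) INTO 2 BUCKETS STORED AS ORC
TBLPROPERTIES (
  'transactional'='true',
  'compactor.mapreduce.map.memory.mb'='1024',
  'compactorthreshold.hive.compactor.delta.num.threshold'='5');

-- Per-request overrides on a manually requested compaction.
ALTER TABLE t1 COMPACT 'major'
WITH OVERWRITE TBLPROPERTIES (
  'compactor.mapreduce.map.memory.mb'='1024',
  'tblprops.orc.compress.size'='8192');
```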
[jira] [Commented] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC
[ https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267781#comment-15267781 ] Sergey Shelukhin commented on HIVE-9660: Hmm. I see, the main difference is that one could track the finished RGs and record the length at the end based on stream position, instead of tracking all the length changes attributed to the RG while it's active... this will change the set-of-active-rgs to set-of-just-finished-rgs (of which there can still be several per CB, or RL block), and move tracking logic around to different places. The dictionary stuff will still have to be there because the direct/dictionary flush each write streams that are separated into RGs out of sync with the main writer (data+length for direct, data for dictionary). I am not sure if it's worth it at this point... I could change the existing patch to do that, or do it in separate JIRA later. If you want to do it from scratch that also works ;) > store end offset of compressed data for RG in RowIndex in ORC > - > > Key: HIVE-9660 > URL: https://issues.apache.org/jira/browse/HIVE-9660 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, > HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, > HIVE-9660.06.patch, HIVE-9660.07.patch, HIVE-9660.07.patch, > HIVE-9660.08.patch, HIVE-9660.09.patch, HIVE-9660.10.patch, > HIVE-9660.10.patch, HIVE-9660.11.patch, HIVE-9660.patch, HIVE-9660.patch > > > Right now the end offset is estimated, which in some cases results in tons of > extra data being read. > We can add a separate array to RowIndex (positions_v2?) that stores number of > compressed buffers for each RG, or end offset, or something, to remove this > estimation magic -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12878) Support Vectorization for TEXTFILE and other formats
[ https://issues.apache.org/jira/browse/HIVE-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-12878: Resolution: Fixed Status: Resolved (was: Patch Available) > Support Vectorization for TEXTFILE and other formats > > > Key: HIVE-12878 > URL: https://issues.apache.org/jira/browse/HIVE-12878 > Project: Hive > Issue Type: New Feature > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 2.1.0 > > Attachments: HIVE-12878.01.patch, HIVE-12878.02.patch, > HIVE-12878.03.patch, HIVE-12878.04.patch, HIVE-12878.05.patch, > HIVE-12878.06.patch, HIVE-12878.07.patch, HIVE-12878.08.patch, > HIVE-12878.09.patch, HIVE-12878.091.patch, HIVE-12878.092.patch, > HIVE-12878.093.patch > > > Support vectorizing when the input format is TEXTFILE and other formats for > better Map Vertex performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12878) Support Vectorization for TEXTFILE and other formats
[ https://issues.apache.org/jira/browse/HIVE-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267772#comment-15267772 ] Matt McCline commented on HIVE-12878: - Committed to master. > Support Vectorization for TEXTFILE and other formats > > > Key: HIVE-12878 > URL: https://issues.apache.org/jira/browse/HIVE-12878 > Project: Hive > Issue Type: New Feature > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 2.1.0 > > Attachments: HIVE-12878.01.patch, HIVE-12878.02.patch, > HIVE-12878.03.patch, HIVE-12878.04.patch, HIVE-12878.05.patch, > HIVE-12878.06.patch, HIVE-12878.07.patch, HIVE-12878.08.patch, > HIVE-12878.09.patch, HIVE-12878.091.patch, HIVE-12878.092.patch, > HIVE-12878.093.patch > > > Support vectorizing when the input format is TEXTFILE and other formats for > better Map Vertex performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12878) Support Vectorization for TEXTFILE and other formats
[ https://issues.apache.org/jira/browse/HIVE-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-12878: Fix Version/s: 2.1.0 > Support Vectorization for TEXTFILE and other formats > > > Key: HIVE-12878 > URL: https://issues.apache.org/jira/browse/HIVE-12878 > Project: Hive > Issue Type: New Feature > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 2.1.0 > > Attachments: HIVE-12878.01.patch, HIVE-12878.02.patch, > HIVE-12878.03.patch, HIVE-12878.04.patch, HIVE-12878.05.patch, > HIVE-12878.06.patch, HIVE-12878.07.patch, HIVE-12878.08.patch, > HIVE-12878.09.patch, HIVE-12878.091.patch, HIVE-12878.092.patch, > HIVE-12878.093.patch > > > Support vectorizing when the input format is TEXTFILE and other formats for > better Map Vertex performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12878) Support Vectorization for TEXTFILE and other formats
[ https://issues.apache.org/jira/browse/HIVE-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267769#comment-15267769 ] Matt McCline commented on HIVE-12878: - New test failures (Age = 1) from the ptest run on HIVE-12878.093.patch are: {code} org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefault 10 sec 1 org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefaultEmptyString 10 sec 1 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby10 15 sec 1 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_cast_constant 17 sec 1 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_14 8.3 sec 1 {code} I ran the TestSparkCliDriver tests on my laptop and they succeeded. The TestHiveMetaStoreGetMetaConf failures appear unrelated. > Support Vectorization for TEXTFILE and other formats > > > Key: HIVE-12878 > URL: https://issues.apache.org/jira/browse/HIVE-12878 > Project: Hive > Issue Type: New Feature > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-12878.01.patch, HIVE-12878.02.patch, > HIVE-12878.03.patch, HIVE-12878.04.patch, HIVE-12878.05.patch, > HIVE-12878.06.patch, HIVE-12878.07.patch, HIVE-12878.08.patch, > HIVE-12878.09.patch, HIVE-12878.091.patch, HIVE-12878.092.patch, > HIVE-12878.093.patch > > > Support vectorizing when the input format is TEXTFILE and other formats for > better Map Vertex performance. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13390: Target Version/s: 1.2.2, 2.0.1 (was: 2.0.1) > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Fix For: 2.0.1 > > Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, > HIVE-13390.2.patch, HIVE-13390.3.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267732#comment-15267732 ] Vaibhav Gumashta commented on HIVE-13390: - Committed to branch-2.0. > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Fix For: 2.0.1 > > Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, > HIVE-13390.2.patch, HIVE-13390.3.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13546) Patch for HIVE-12893 is broken in branch-1
[ https://issues.apache.org/jira/browse/HIVE-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran resolved HIVE-13546. -- Resolution: Fixed Fix Version/s: 1.3.0 Committed patch to branch-1. [~nemon] Thanks for the contribution! > Patch for HIVE-12893 is broken in branch-1 > --- > > Key: HIVE-13546 > URL: https://issues.apache.org/jira/browse/HIVE-13546 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 1.3.0 >Reporter: Nemon Lou >Assignee: Nemon Lou > Fix For: 1.3.0 > > Attachments: HIVE-13546.patch > > > The following sql fails: > {noformat} > set hive.map.aggr=true; > set mapreduce.reduce.speculative=false; > set hive.auto.convert.join=true; > set hive.optimize.reducededuplication = false; > set hive.optimize.reducededuplication.min.reducer=1; > set hive.optimize.mapjoin.mapreduce=true; > set hive.stats.autogather=true; > set mapred.reduce.parallel.copies=30; > set mapred.job.shuffle.input.buffer.percent=0.5; > set mapred.job.reduce.input.buffer.percent=0.2; > set mapred.map.child.java.opts=-server -Xmx2800m > -Djava.net.preferIPv4Stack=true; > set mapred.reduce.child.java.opts=-server -Xmx3800m > -Djava.net.preferIPv4Stack=true; > set mapreduce.map.memory.mb=3072; > set mapreduce.reduce.memory.mb=4096; > set hive.enforce.bucketing=true; > set hive.enforce.sorting=true; > set hive.exec.dynamic.partition.mode=nonstrict; > set hive.exec.max.dynamic.partitions.pernode=10; > set hive.exec.max.dynamic.partitions=10; > set hive.exec.max.created.files=100; > set hive.exec.parallel=true; > set hive.exec.reducers.max=2000; > set hive.stats.autogather=true; > set hive.optimize.sort.dynamic.partition=true; > set mapred.job.reduce.input.buffer.percent=0.0; > set mapreduce.input.fileinputformat.split.minsizee=24000; > set mapreduce.input.fileinputformat.split.minsize.per.node=24000; > set mapreduce.input.fileinputformat.split.minsize.per.rack=24000; > set hive.optimize.sort.dynamic.partition=true; > 
use tpcds_bin_partitioned_orc_4; > insert overwrite table store_sales partition (ss_sold_date_sk) > select > ss.ss_sold_time_sk, > ss.ss_item_sk, > ss.ss_customer_sk, > ss.ss_cdemo_sk, > ss.ss_hdemo_sk, > ss.ss_addr_sk, > ss.ss_store_sk, > ss.ss_promo_sk, > ss.ss_ticket_number, > ss.ss_quantity, > ss.ss_wholesale_cost, > ss.ss_list_price, > ss.ss_sales_price, > ss.ss_ext_discount_amt, > ss.ss_ext_sales_price, > ss.ss_ext_wholesale_cost, > ss.ss_ext_list_price, > ss.ss_ext_tax, > ss.ss_coupon_amt, > ss.ss_net_paid, > ss.ss_net_paid_inc_tax, > ss.ss_net_profit, > ss.ss_sold_date_sk > from tpcds_text_4.store_sales ss; > {noformat} > Error log is as follows > {noformat} > 2016-04-19 15:15:35,252 FATAL [main] ExecReducer: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row (tag=0) > {"key":{"reducesinkkey0":null},"value":{"_col0":null,"_col1":5588,"_col2":170300,"_col3":null,"_col4":756,"_col5":91384,"_col6":16,"_col7":null,"_col8":855582,"_col9":28,"_col10":null,"_col11":48.83,"_col12":null,"_col13":0.0,"_col14":null,"_col15":899.64,"_col16":null,"_col17":6.14,"_col18":0.0,"_col19":null,"_col20":null,"_col21":null,"_col22":null}} > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:244) > at > org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:180) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1732) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:174) > Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 > at java.util.ArrayList.rangeCheck(ArrayList.java:653) > at java.util.ArrayList.get(ArrayList.java:429) > at > 
org.apache.hadoop.hive.common.FileUtils.makePartName(FileUtils.java:151) > at > org.apache.hadoop.hive.common.FileUtils.makePartName(FileUtils.java:131) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynPartDirectory(FileSinkOperator.java:1003) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutPaths(FileSinkOperator.java:919) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:713) > at > org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:235) > ... 7 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
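The `IndexOutOfBoundsException: Index: 0, Size: 0` in the trace above comes from `FileUtils.makePartName` indexing into an empty partition-column list while a dynamic-partition value is still expected. The following is a minimal, self-contained Java sketch of that failure mode; `makePartName` here is a hypothetical mirror of the name-building loop, not Hive's actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MakePartNameSketch {
    // Builds "col1=val1/col2=val2/..." the way a dynamic-partition directory
    // is named. Calling cols.get(i) with an empty column list throws
    // IndexOutOfBoundsException, matching the stack trace in the report.
    static String makePartName(List<String> cols, List<String> vals) {
        StringBuilder name = new StringBuilder();
        for (int i = 0; i < vals.size(); i++) {
            if (i > 0) name.append('/');
            name.append(cols.get(i)).append('=').append(vals.get(i));
        }
        return name.toString();
    }

    public static void main(String[] args) {
        // Healthy case: one partition column, one value.
        System.out.println(makePartName(Arrays.asList("ss_sold_date_sk"),
                                        Arrays.asList("2451813")));
        // Broken plan: no partition columns recorded, but a value arrives anyway.
        try {
            makePartName(new ArrayList<String>(), Arrays.asList("2451813"));
        } catch (IndexOutOfBoundsException e) {
            System.out.println("caught " + e.getClass().getSimpleName());
        }
    }
}
```

The fix in the attached patch is about keeping the partition-column list populated on branch-1, not about guarding this loop; the sketch only shows why the empty list is fatal.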
[jira] [Commented] (HIVE-13546) Patch for HIVE-12893 is broken in branch-1
[ https://issues.apache.org/jira/browse/HIVE-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267703#comment-15267703 ] Prasanth Jayachandran commented on HIVE-13546: -- +1. [~nemon] Thanks for looking into this!
[jira] [Updated] (HIVE-13546) Patch for HIVE-12893 is broken in branch-1
[ https://issues.apache.org/jira/browse/HIVE-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13546: - Assignee: Nemon Lou
[jira] [Updated] (HIVE-13442) LLAP: refactor submit API to be amenable to signing
[ https://issues.apache.org/jira/browse/HIVE-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13442: Attachment: HIVE-13442.01.patch Moved one log line up... > LLAP: refactor submit API to be amenable to signing > --- > > Key: HIVE-13442 > URL: https://issues.apache.org/jira/browse/HIVE-13442 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13442.01.patch, HIVE-13442.nogen.patch, > HIVE-13442.patch, HIVE-13442.patch, HIVE-13442.protobuf.patch > > > This is going to be a wire compat breaking change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13671) Add PerfLogger to log4j2.properties logger
[ https://issues.apache.org/jira/browse/HIVE-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267669#comment-15267669 ] Sergey Shelukhin commented on HIVE-13671: - hmm, is this different from --hiveconf? hiveconf is used for HiveConf stuff, which may not be validated on that path, but is validated at least on some other paths. It should probably be validated if not. But, as long as it works now, oh well. +1 > Add PerfLogger to log4j2.properties logger > -- > > Key: HIVE-13671 > URL: https://issues.apache.org/jira/browse/HIVE-13671 > Project: Hive > Issue Type: Bug > Components: Logging >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13671.1.patch > > > To enable perflogging, root logging has to be set to DEBUG. Provide a way to > to independently configure perflogger and root logger levels. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13671) Add PerfLogger to log4j2.properties logger
[ https://issues.apache.org/jira/browse/HIVE-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267663#comment-15267663 ] Prasanth Jayachandran commented on HIVE-13671: -- The keys specified via -hiveconf are added as system properties directly. There is no hive config equivalent to those properties. We don't have to add this to HiveConf.
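For readers wondering what such a configuration looks like, here is a hedged sketch of a log4j2.properties fragment that gives the PerfLogger its own level, independent of the root logger, overridable via a system property set by -hiveconf. The property name `hive.perflogger.log.level` and the logger naming are assumptions for illustration; the actual patch may use different names.

```properties
# Sketch only: level for the PerfLogger, settable independently of the root
# logger (e.g. via -hiveconf hive.perflogger.log.level=DEBUG, which Hive
# passes through as a system property).
property.hive.perflogger.log.level = INFO

# Dedicated logger for the PerfLogger class, so enabling perf logging no
# longer requires setting the root logger to DEBUG.
logger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger
logger.PerfLogger.level = ${sys:hive.perflogger.log.level}

rootLogger.level = WARN
```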
[jira] [Commented] (HIVE-13442) LLAP: refactor submit API to be amenable to signing
[ https://issues.apache.org/jira/browse/HIVE-13442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267649#comment-15267649 ] Sergey Shelukhin commented on HIVE-13442: - {quote}After HIVE-13391 - we should stop sending any credentials from HiveServer2. That would be a separate jira. If HS2 is sending over any credentials - those should not be visible to the user. This would typically include the hive token - and gives the client access to whatever they want to read.{quote} This is the LLAP API - this is between the client and LLAP, HS2 is not involved in this part. {quote} I don't think we need to allow users to send in credentials. If we do - it would be better to separate credentials which are setup by HS2 for LLAP into a separate field which will be signed. A new field can be used for user specified credentials. External clients will need access to a token to talk to LLAP - so that would have to be sent over in a readable field.{quote} Hmm... these are the user credentials like HDFS tokens. So this is already what is done.
[jira] [Commented] (HIVE-11550) ACID queries pollute HiveConf
[ https://issues.apache.org/jira/browse/HIVE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267648#comment-15267648 ] Eugene Koifman commented on HIVE-11550: --- yes they are intentional. Since SemanticAnalyzer.checkAcidConstraints() is no longer modifying HiveConf (note the /**/ which may not be clear in the diffs), we need to make sure that the same optimizers are enabled/disabled on a per (acid) query basis. > ACID queries pollute HiveConf > - > > Key: HIVE-11550 > URL: https://issues.apache.org/jira/browse/HIVE-11550 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-11550.patch > > > HiveConf is a SessionState level object. Some ACID related logic makes > changes to it (which are meant to be per query) but become per SessionState. > See SemanticAnalyzer.checkAcidConstraints() > Also note HiveConf.setVar(conf, > HiveConf.ConfVars.DYNAMICPARTITIONINGMODE, "nonstrict"); > in UpdateDeleteSemanticAnalyzer > [~alangates], do you know of other cases or ideas on how to deal with this > differently? > _SortedDynPartitionOptimizer.process()_ is the place to have the logic to do > _conf.setBoolVar(ConfVars.HIVEOPTSORTDYNAMICPARTITION, false);_ on per query > basis
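The session-vs-query scoping problem described in this issue can be sketched in a few lines of Java. `Conf` here is a hypothetical stand-in, not Hive's actual HiveConf API: mutating the session-level conf for one ACID query leaks the override into every later query, while a per-query copy keeps it scoped.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfScopeSketch {
    // Hypothetical session-level config object, for illustration only.
    static class Conf {
        final Map<String, String> vals = new HashMap<>();
        Conf() { vals.put("hive.optimize.sort.dynamic.partition", "true"); }
        Conf(Conf other) { vals.putAll(other.vals); } // per-query copy
        void set(String k, String v) { vals.put(k, v); }
        String get(String k) { return vals.get(k); }
    }

    public static void main(String[] args) {
        // Polluting pattern: the ACID query mutates the shared session conf,
        // so the override survives into later, non-ACID queries.
        Conf session = new Conf();
        session.set("hive.optimize.sort.dynamic.partition", "false");
        System.out.println("shared conf after acid query: "
                + session.get("hive.optimize.sort.dynamic.partition"));

        // Scoped pattern: copy per query, mutate only the copy.
        Conf session2 = new Conf();
        Conf perQuery = new Conf(session2);
        perQuery.set("hive.optimize.sort.dynamic.partition", "false");
        System.out.println("session conf after scoped acid query: "
                + session2.get("hive.optimize.sort.dynamic.partition"));
    }
}
```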
[jira] [Commented] (HIVE-13661) [Refactor] Move common FS operations out of shim layer
[ https://issues.apache.org/jira/browse/HIVE-13661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267644#comment-15267644 ] Hive QA commented on HIVE-13661: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12801626/HIVE-13361.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 40 failed/errored test(s), 9989 tests executed *Failed tests:* {noformat} TestHWISessionManager - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefault org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefaultEmptyString org.apache.hadoop.hive.metastore.TestHiveMetaStoreWithEnvironmentContext.testEnvironmentContext org.apache.hadoop.hive.metastore.TestMetaStoreAuthorization.testMetaStoreAuthorization org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters 
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testRevokeTimedOutWorkers org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.concurrencyFalse org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testLockTimeout org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testUpdate org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccessWithReadOnly org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser 
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable org.apache.hive.hcatalog.listener.TestDbNotificationListener.filter org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS {noformat} Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/154/testReport Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/154/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-154/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 40 tests failed {noformat} This message is automatically generated.
[jira] [Commented] (HIVE-13671) Add PerfLogger to log4j2.properties logger
[ https://issues.apache.org/jira/browse/HIVE-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267640#comment-15267640 ] Sergey Shelukhin commented on HIVE-13671: - +1 Does it need to be added to HiveConf? hive will complain about incorrect property otherwise, I suspect
[jira] [Updated] (HIVE-13351) Support drop Primary Key/Foreign Key constraints
[ https://issues.apache.org/jira/browse/HIVE-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13351: - Attachment: HIVE-13351.1.patch > Support drop Primary Key/Foreign Key constraints > > > Key: HIVE-13351 > URL: https://issues.apache.org/jira/browse/HIVE-13351 > Project: Hive > Issue Type: Sub-task > Components: CBO, Logical Optimizer >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13351.1.patch > > > ALTER TABLE TABLENAME DROP CONSTRAINT CONSTRAINTNAME; > The CONSTRAINTNAME has to be associated with the mentioned table, i.e. there > should be at least 1 table column of TABLENAME participating in the constraint. > Otherwise, we should throw an error.
[jira] [Updated] (HIVE-13351) Support drop Primary Key/Foreign Key constraints
[ https://issues.apache.org/jira/browse/HIVE-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13351: - Attachment: (was: HIVE-13351.1.patch)
[jira] [Commented] (HIVE-11550) ACID queries pollute HiveConf
[ https://issues.apache.org/jira/browse/HIVE-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267627#comment-15267627 ] Alan Gates commented on HIVE-11550: --- Are the changes in Optimizer.java and AbstractCorrelationProcCtx.java related or did they get into the patch by accident? If they are there intentionally it's not clear how they are related, so some comments would be helpful.
[jira] [Commented] (HIVE-13213) make DbLockManger work for non-acid resources
[ https://issues.apache.org/jira/browse/HIVE-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267618#comment-15267618 ] Alan Gates commented on HIVE-13213: --- +1 > make DbLockManger work for non-acid resources > - > > Key: HIVE-13213 > URL: https://issues.apache.org/jira/browse/HIVE-13213 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-13213.patch > > > for example, > insert into T values(...) > if T is an ACID table we acquire Read lock > but for non-acid table it should acquire Exclusive lock -- This message was sent by Atlassian JIRA (v6.3.4#6332)
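The rule stated in this issue can be sketched as a tiny lock-selection function. The names here are hypothetical, not Hive's actual DbLockManager API: an insert into an ACID table can take a shared read lock because concurrent writers are coordinated through transactions, while the same insert into a non-ACID table must take an exclusive lock.

```java
public class LockChoiceSketch {
    enum LockType { SHARED_READ, EXCLUSIVE }

    // For an INSERT: ACID tables tolerate concurrent writers (delta files plus
    // transaction metadata), so a shared read lock suffices; non-ACID inserts
    // modify files in place with no versioning, so they need exclusion.
    static LockType lockForInsert(boolean isAcidTable) {
        return isAcidTable ? LockType.SHARED_READ : LockType.EXCLUSIVE;
    }

    public static void main(String[] args) {
        System.out.println("acid insert     -> " + lockForInsert(true));
        System.out.println("non-acid insert -> " + lockForInsert(false));
    }
}
```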
[jira] [Updated] (HIVE-13351) Support drop Primary Key/Foreign Key constraints
[ https://issues.apache.org/jira/browse/HIVE-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13351: - Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-13351) Support drop Primary Key/Foreign Key constraints
[ https://issues.apache.org/jira/browse/HIVE-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13351: - Attachment: HIVE-13351.1.patch
[jira] [Updated] (HIVE-13671) Add PerfLogger to log4j2.properties logger
[ https://issues.apache.org/jira/browse/HIVE-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13671: - Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-13671) Add PerfLogger to log4j2.properties logger
[ https://issues.apache.org/jira/browse/HIVE-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13671: - Attachment: HIVE-13671.1.patch [~sershe] Could you review this patch? Small changes to set perflogger level via -hiveconf or via log4j2.properties file.
[jira] [Updated] (HIVE-13632) Hive failing on insert empty array into parquet table
[ https://issues.apache.org/jira/browse/HIVE-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-13632: Attachment: HIVE-13632.2.patch [~spena], patch 2 includes your fix for reading parquet tables. Also tested all the 47 q files with parquet in the file name, all pass: Tests run: 47, Failures: 0, Errors: 0, Skipped: 0 > Hive failing on insert empty array into parquet table > - > > Key: HIVE-13632 > URL: https://issues.apache.org/jira/browse/HIVE-13632 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers >Affects Versions: 1.1.0 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen > Attachments: HIVE-13632.1.patch, HIVE-13632.2.patch > > > The insert will fail with following stack: > {noformat} > by: parquet.io.ParquetEncodingException: empty fields are illegal, the field > should be ommited completely instead > at > parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:271) > at > org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$ListDataWriter.write(DataWritableWriter.java:271) > at > org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:199) > at > org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.write(DataWritableWriter.java:215) > at > org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:88) > at > org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59) > at > org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31) > at > parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:116) > at > parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123) > at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42) > at >
org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111) > at > org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:697) > {noformat} > Reproduce: > {noformat} > create table test_small ( > key string, > arrayValues array<string>) > stored as parquet; > insert into table test_small select 'abcd', array() from src limit 1; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
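The exception above comes from Parquet rejecting a field that is started and ended with no values: an empty repeated field must be omitted entirely. The guard that fixes this can be sketched outside Hive; the class and method names below are hypothetical stand-ins that mimic the record-consumer call pattern, not Hive's actual DataWritableWriter.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the guard Parquet requires: an empty repeated field
// must be omitted completely rather than emitted as an empty start/end pair.
public class EmptyArrayWriteSketch {
    // Records the field events that would go to parquet's RecordConsumer.
    static final List<String> events = new ArrayList<>();

    static void writeList(String field, List<String> values) {
        // The essence of the fix: omit the field when there is nothing to write.
        if (values == null || values.isEmpty()) {
            return;
        }
        events.add("startField:" + field);
        for (String v : values) {
            events.add("addValue:" + v);
        }
        events.add("endField:" + field);
    }

    public static void main(String[] args) {
        writeList("arrayValues", new ArrayList<String>()); // emits nothing
        writeList("arrayValues", Arrays.asList("a", "b"));
        System.out.println(events);
    }
}
```

Without the empty check, the empty call would emit a start/end pair with no values between them, which is exactly the state MessageColumnIORecordConsumer.endField rejects in the stack trace.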
[jira] [Updated] (HIVE-11887) some tests break the build on a shared machine, can break HiveQA
[ https://issues.apache.org/jira/browse/HIVE-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-11887: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Thanks for the review! > some tests break the build on a shared machine, can break HiveQA > > > Key: HIVE-11887 > URL: https://issues.apache.org/jira/browse/HIVE-11887 > Project: Hive > Issue Type: Test > Components: Test >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.1.0 > > Attachments: HIVE-11887.01.patch, HIVE-11887.02.patch, > HIVE-11887.patch > > > Spark download creates UDFExampleAdd jar in /tmp; when building on a shared > machine, someone else's jar from a build prevents this jar from being created > (I have no permissions to this file because it was created by a different > user) and the build fails. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11887) some tests break the build on a shared machine, can break HiveQA
[ https://issues.apache.org/jira/browse/HIVE-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267606#comment-15267606 ] Sergey Shelukhin commented on HIVE-11887: - I don't even know which tests these were anymore; nothing failed due to this removal, and I cannot find references to this jar in /tmp upon a quick look. Committed to master. > some tests break the build on a shared machine, can break HiveQA > > > Key: HIVE-11887 > URL: https://issues.apache.org/jira/browse/HIVE-11887 > Project: Hive > Issue Type: Test > Components: Test >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-11887.01.patch, HIVE-11887.02.patch, > HIVE-11887.patch > > > Spark download creates UDFExampleAdd jar in /tmp; when building on a shared > machine, someone else's jar from a build prevents this jar from being created > (I have no permissions to this file because it was created by a different > user) and the build fails. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11887) some tests break the build on a shared machine, can break HiveQA
[ https://issues.apache.org/jira/browse/HIVE-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-11887: Component/s: Test > some tests break the build on a shared machine, can break HiveQA > > > Key: HIVE-11887 > URL: https://issues.apache.org/jira/browse/HIVE-11887 > Project: Hive > Issue Type: Bug > Components: Test >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-11887.01.patch, HIVE-11887.02.patch, > HIVE-11887.patch > > > Spark download creates UDFExampleAdd jar in /tmp; when building on a shared > machine, someone else's jar from a build prevents this jar from being created > (I have no permissions to this file because it was created by a different > user) and the build fails. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11887) some tests break the build on a shared machine, can break HiveQA
[ https://issues.apache.org/jira/browse/HIVE-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-11887: Issue Type: Test (was: Bug) > some tests break the build on a shared machine, can break HiveQA > > > Key: HIVE-11887 > URL: https://issues.apache.org/jira/browse/HIVE-11887 > Project: Hive > Issue Type: Test > Components: Test >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-11887.01.patch, HIVE-11887.02.patch, > HIVE-11887.patch > > > Spark download creates UDFExampleAdd jar in /tmp; when building on a shared > machine, someone else's jar from a build prevents this jar from being created > (I have no permissions to this file because it was created by a different > user) and the build fails. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13351) Support drop Primary Key/Foreign Key constraints
[ https://issues.apache.org/jira/browse/HIVE-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13351: - Description: ALTER TABLE TABLENAME DROP CONSTRAINT CONSTRAINTNAME; The CONSTRAINTNAME has to be associated with the mentioned table, i.e. there should be at least one table column of TABLENAME participating in the constraint. Otherwise, we should throw an error. was: ALTER TABLE TABLENAME DROP CONSTRAINT CONSTRAINTNAME; The CONSTRAINTNAME has to be associated with the mentioned table. > Support drop Primary Key/Foreign Key constraints > > > Key: HIVE-13351 > URL: https://issues.apache.org/jira/browse/HIVE-13351 > Project: Hive > Issue Type: Sub-task > Components: CBO, Logical Optimizer >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > > ALTER TABLE TABLENAME DROP CONSTRAINT CONSTRAINTNAME; > The CONSTRAINTNAME has to be associated with the mentioned table, i.e. there > should be at least one table column of TABLENAME participating in the constraint. > Otherwise, we should throw an error. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13672) Use loginUser from UGI to get llap user when generating LLAP splits.
[ https://issues.apache.org/jira/browse/HIVE-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere resolved HIVE-13672. --- Resolution: Fixed Fix Version/s: llap committed to llap branch > Use loginUser from UGI to get llap user when generating LLAP splits. > > > Key: HIVE-13672 > URL: https://issues.apache.org/jira/browse/HIVE-13672 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Fix For: llap > > Attachments: HIVE-13672.1.patch > > > HIVE-13389 used RegistryUtils.currentUser() to get the llap user name when > generating LLAP splits. However it looks like this will return the client > username, while we really want to get the hive/llap user. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11887) some tests break the build on a shared machine, can break HiveQA
[ https://issues.apache.org/jira/browse/HIVE-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-11887: Summary: some tests break the build on a shared machine, can break HiveQA (was: spark tests break the build on a shared machine, can break HiveQA) > some tests break the build on a shared machine, can break HiveQA > > > Key: HIVE-11887 > URL: https://issues.apache.org/jira/browse/HIVE-11887 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-11887.01.patch, HIVE-11887.02.patch, > HIVE-11887.patch > > > Spark download creates UDFExampleAdd jar in /tmp; when building on a shared > machine, someone else's jar from a build prevents this jar from being created > (I have no permissions to this file because it was created by a different > user) and the build fails. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13445) LLAP: token should encode application and cluster ids
[ https://issues.apache.org/jira/browse/HIVE-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13445: Attachment: HIVE-13445.05.patch Rebasing, addressing RB feedback. > LLAP: token should encode application and cluster ids > - > > Key: HIVE-13445 > URL: https://issues.apache.org/jira/browse/HIVE-13445 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13445.01.patch, HIVE-13445.02.patch, > HIVE-13445.03.patch, HIVE-13445.04.patch, HIVE-13445.05.patch, > HIVE-13445.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13672) Use loginUser from UGI to get llap user when generating LLAP splits.
[ https://issues.apache.org/jira/browse/HIVE-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-13672: -- Attachment: HIVE-13672.1.patch > Use loginUser from UGI to get llap user when generating LLAP splits. > > > Key: HIVE-13672 > URL: https://issues.apache.org/jira/browse/HIVE-13672 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13672.1.patch > > > HIVE-13389 used RegistryUtils.currentUser() to get the llap user name when > generating LLAP splits. However it looks like this will return the client > username, while we really want to get the hive/llap user. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13671) Add PerfLogger to log4j2.properties logger
[ https://issues.apache.org/jira/browse/HIVE-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13671: - Description: To enable perflogging, root logging has to be set to DEBUG. Provide a way to independently configure perflogger and root logger levels. (was: 1) Add PerfLogger to log4j2.properties logger 2) Add llap cli log4j2 properties file to hive dist) > Add PerfLogger to log4j2.properties logger > -- > > Key: HIVE-13671 > URL: https://issues.apache.org/jira/browse/HIVE-13671 > Project: Hive > Issue Type: Bug > Components: Logging >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > > To enable perflogging, root logging has to be set to DEBUG. Provide a way to > independently configure perflogger and root logger levels. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13671) Add PerfLogger to log4j2.properties logger
[ https://issues.apache.org/jira/browse/HIVE-13671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13671: - Summary: Add PerfLogger to log4j2.properties logger (was: Some log4j2 properties changes for perflogging and llap cli tool) > Add PerfLogger to log4j2.properties logger > -- > > Key: HIVE-13671 > URL: https://issues.apache.org/jira/browse/HIVE-13671 > Project: Hive > Issue Type: Bug > Components: Logging >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > > 1) Add PerfLogger to log4j2.properties logger > 2) Add llap cli log4j2 properties file to hive dist -- This message was sent by Atlassian JIRA (v6.3.4#6332)
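The independent-level configuration HIVE-13671 describes can be sketched in log4j2 properties syntax: a dedicated logger entry for PerfLogger runs at DEBUG while the root logger stays at INFO. This is a hedged sketch; the appender name and layout pattern are assumptions for illustration, not Hive's shipped hive-log4j2.properties.

```properties
# Sketch only: a named logger lets the PerfLogger emit DEBUG output while the
# root logger stays at INFO. Appender name/pattern are illustrative assumptions.
status = INFO

appenders = console
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} %-5p [%t] %c{2}: %m%n

rootLogger.level = INFO
rootLogger.appenderRefs = root
rootLogger.appenderRef.root.ref = console

loggers = PerfLogger
logger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger
logger.PerfLogger.level = DEBUG
```

With this shape, raising perflogger verbosity no longer requires setting the root level to DEBUG for the whole process.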
[jira] [Commented] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data
[ https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267573#comment-15267573 ] Sergey Shelukhin commented on HIVE-13391: - A different approach: clone the UGI. This solves both the token problem and the undesirable FS object sharing. I filed HADOOP-13081 to add a proper API for this... I'll add a reference to it in the TODO comment in the next iteration or on commit. > add an option to LLAP to use keytab to authenticate to read data > > > Key: HIVE-13391 > URL: https://issues.apache.org/jira/browse/HIVE-13391 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, > HIVE-13391.03.patch, HIVE-13391.04.patch, HIVE-13391.05.patch, > HIVE-13391.06.patch, HIVE-13391.07.patch, HIVE-13391.patch > > > This can be used for non-doAs case to allow access to clients who don't > propagate HDFS tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data
[ https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13391: Attachment: HIVE-13391.07.patch > add an option to LLAP to use keytab to authenticate to read data > > > Key: HIVE-13391 > URL: https://issues.apache.org/jira/browse/HIVE-13391 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, > HIVE-13391.03.patch, HIVE-13391.04.patch, HIVE-13391.05.patch, > HIVE-13391.06.patch, HIVE-13391.07.patch, HIVE-13391.patch > > > This can be used for non-doAs case to allow access to clients who don't > propagate HDFS tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13342) Improve logging in llap decider and throw exception in case llap mode is all but we cannot run in llap.
[ https://issues.apache.org/jira/browse/HIVE-13342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-13342: -- Attachment: HIVE-13342.5.patch The new logs capture which operator failed or passed the check. I modified the patch so that the log is printed only in case of failure. > Improve logging in llap decider and throw exception in case llap mode is all > but we cannot run in llap. > --- > > Key: HIVE-13342 > URL: https://issues.apache.org/jira/browse/HIVE-13342 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13342.1.patch, HIVE-13342.2.patch, > HIVE-13342.3.patch, HIVE-13342.4.patch, HIVE-13342.5.patch > > > Currently we do not log our decisions with respect to llap: are we running > everything in llap mode or only parts of the plan? We need more logging. > Also, if llap mode is all but for some reason we cannot run the work in llap > mode, fail and throw an exception advising the user to change the mode to auto. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11880) filter bug of UNION ALL when hive.ppd.remove.duplicatefilters=true and filter condition is type incompatible column
[ https://issues.apache.org/jira/browse/HIVE-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-11880: Resolution: Duplicate Status: Resolved (was: Patch Available) Resolving this issue as a duplicate of HIVE-13570. > filter bug of UNION ALL when hive.ppd.remove.duplicatefilters=true and > filter condition is type incompatible column > - > > Key: HIVE-11880 > URL: https://issues.apache.org/jira/browse/HIVE-11880 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 1.2.1 >Reporter: WangMeng >Assignee: WangMeng > Attachments: HIVE-11880.01.patch, HIVE-11880.02.patch, > HIVE-11880.03.patch, HIVE-11880.04.patch > > >For UNION ALL, when a union operand is a constant column (such as '0L', > BIGINT type) and its corresponding column has an incompatible type (such as INT > type), a query with a filter condition on the type-incompatible column over this UNION ALL > will cause an IndexOutOfBoundsException. > For example, with the TPC-H table "orders": the type of 'orders'.'o_custkey' is normally INT, while the type of the > corresponding constant column "0" is BIGINT ( `0L AS `o_custkey` ). > The following query (with a filter on the type-incompatible column 'o_custkey') > will fail with java.lang.IndexOutOfBoundsException: > {code} > set hive.cbo.enable=false; > set hive.ppd.remove.duplicatefilters=true; > CREATE TABLE `orders`( > `o_orderkey` int, > `o_custkey` int, > `o_orderstatus` string, > `o_totalprice` double, > `o_orderdate` string, > `o_orderpriority` string, > `o_clerk` string, > `o_shippriority` int, > `o_comment` string); > SELECT o_orderkey > FROM ( > SELECT `o_orderkey` , > `o_custkey` > FROM `orders` > UNION ALL > SELECT `o_orderkey`, > 0L AS `o_custkey` > FROM `orders`) `oo` > WHERE o_custkey<10; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11880) filter bug of UNION ALL when hive.ppd.remove.duplicatefilters=true and filter condition is type incompatible column
[ https://issues.apache.org/jira/browse/HIVE-11880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267487#comment-15267487 ] Aihua Xu commented on HIVE-11880: - This issue seems to have been fixed by HIVE-13570. > filter bug of UNION ALL when hive.ppd.remove.duplicatefilters=true and > filter condition is type incompatible column > - > > Key: HIVE-11880 > URL: https://issues.apache.org/jira/browse/HIVE-11880 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 1.2.1 >Reporter: WangMeng >Assignee: WangMeng > Attachments: HIVE-11880.01.patch, HIVE-11880.02.patch, > HIVE-11880.03.patch, HIVE-11880.04.patch > > >For UNION ALL, when a union operand is a constant column (such as '0L', > BIGINT type) and its corresponding column has an incompatible type (such as INT > type), a query with a filter condition on the type-incompatible column over this UNION ALL > will cause an IndexOutOfBoundsException. > For example, with the TPC-H table "orders": the type of 'orders'.'o_custkey' is normally INT, while the type of the > corresponding constant column "0" is BIGINT ( `0L AS `o_custkey` ). > The following query (with a filter on the type-incompatible column 'o_custkey') > will fail with java.lang.IndexOutOfBoundsException: > {code} > set hive.cbo.enable=false; > set hive.ppd.remove.duplicatefilters=true; > CREATE TABLE `orders`( > `o_orderkey` int, > `o_custkey` int, > `o_orderstatus` string, > `o_totalprice` double, > `o_orderdate` string, > `o_orderpriority` string, > `o_clerk` string, > `o_shippriority` int, > `o_comment` string); > SELECT o_orderkey > FROM ( > SELECT `o_orderkey` , > `o_custkey` > FROM `orders` > UNION ALL > SELECT `o_orderkey`, > 0L AS `o_custkey` > FROM `orders`) `oo` > WHERE o_custkey<10; > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
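For readers hitting this before picking up the fix, the usual way to sidestep this class of failure is to give the constant branch the same type as the matching column in the other branch. This is a hedged sketch against the `orders` table defined in the report above, not a change shipped with HIVE-13570.

```sql
-- Cast the constant so both UNION ALL branches agree on INT for o_custkey,
-- avoiding the type-incompatible column the filter pushdown trips over.
SELECT o_orderkey
FROM (
  SELECT `o_orderkey`, `o_custkey` FROM `orders`
  UNION ALL
  SELECT `o_orderkey`, CAST(0 AS INT) AS `o_custkey` FROM `orders`
) `oo`
WHERE o_custkey < 10;
```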
[jira] [Updated] (HIVE-13351) Support drop Primary Key/Foreign Key constraints
[ https://issues.apache.org/jira/browse/HIVE-13351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Sankar Sivarama Subramaniyan updated HIVE-13351: - Description: ALTER TABLE TABLENAME DROP CONSTRAINT CONSTRAINTNAME; The CONSTRAINTNAME has to be associated with the mentioned table. > Support drop Primary Key/Foreign Key constraints > > > Key: HIVE-13351 > URL: https://issues.apache.org/jira/browse/HIVE-13351 > Project: Hive > Issue Type: Sub-task > Components: CBO, Logical Optimizer >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > > ALTER TABLE TABLENAME DROP CONSTRAINT CONSTRAINTNAME; > The CONSTRAINTNAME has to be associated with the mentioned table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13390: Resolution: Fixed Status: Resolved (was: Patch Available) Committed to master. Thanks [~sushanth] for the review. > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, > HIVE-13390.2.patch, HIVE-13390.3.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13390: Fix Version/s: 2.0.1 > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Fix For: 2.0.1 > > Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, > HIVE-13390.2.patch, HIVE-13390.3.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13390: Affects Version/s: 1.2.1 2.0.0 > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Fix For: 2.0.1 > > Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, > HIVE-13390.2.patch, HIVE-13390.3.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13235) Insert from select generates incorrect result when hive.optimize.constant.propagation is on
[ https://issues.apache.org/jira/browse/HIVE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267452#comment-15267452 ] Ashutosh Chauhan commented on HIVE-13235: - [~pxiong] Is this the same as HIVE-13602? > Insert from select generates incorrect result when > hive.optimize.constant.propagation is on > --- > > Key: HIVE-13235 > URL: https://issues.apache.org/jira/browse/HIVE-13235 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13235.1.patch, HIVE-13235.2.patch, > HIVE-13235.3.patch, HIVE-13235.4.patch > > > The following query returns an incorrect result when constant optimization is > turned on. The subquery happens to have an alias p1 that is the same as the > input partition name. The constant optimizer will incorrectly replace it with the > constant. > When the constant optimizer is turned off, we get the correct result. > {noformat} > set hive.cbo.enable=false; > set hive.optimize.constant.propagation = true; > create table t1(c1 string, c2 double) partitioned by (p1 string, p2 string); > create table t2(p1 double, c2 string); > insert into table t1 partition(p1='40', p2='p2') values('c1', 0.0); > INSERT OVERWRITE TABLE t2 select if((c2 = 0.0), c2, '0') as p1, 2 as p2 from > t1 where c1 = 'c1' and p1 = '40'; > select * from t2; > 40 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13343) Need to disable hybrid grace hash join in llap mode except for dynamically partitioned hash join
[ https://issues.apache.org/jira/browse/HIVE-13343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-13343: -- Attachment: HIVE-13343.5.patch > Need to disable hybrid grace hash join in llap mode except for dynamically > partitioned hash join > > > Key: HIVE-13343 > URL: https://issues.apache.org/jira/browse/HIVE-13343 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13343.1.patch, HIVE-13343.2.patch, > HIVE-13343.3.patch, HIVE-13343.4.patch, HIVE-13343.5.patch > > > Due to performance reasons, we should disable use of hybrid grace hash join > in llap when dynamic partition hash join is not used. With dynamic partition > hash join, we need hybrid grace hash join due to the possibility of skews. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13620) Merge llap branch work to master
[ https://issues.apache.org/jira/browse/HIVE-13620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267439#comment-15267439 ] Hive QA commented on HIVE-13620: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12801623/HIVE-13620.4.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 53 failed/errored test(s), 9932 tests executed *Failed tests:* {noformat} TestHWISessionManager - did not produce a TEST-*.xml file TestMiniTezCliDriver-groupby2.q-tez_dynpart_hashjoin_1.q-custom_input_output_format.q-and-12-more - did not produce a TEST-*.xml file TestMiniTezCliDriver-vector_decimal_2.q-explainuser_1.q-explainuser_3.q-and-12-more - did not produce a TEST-*.xml file TestMiniTezCliDriver-vector_distinct_2.q-tez_joins_explain.q-cte_mat_1.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_functions org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_minimr_broken_pipe org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testFinishablePreeptsNonFinishable org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testWaitQueuePreemption org.apache.hadoop.hive.llap.daemon.impl.comparator.TestFirstInFirstOutComparator.testWaitQueueComparator org.apache.hadoop.hive.llap.daemon.impl.comparator.TestFirstInFirstOutComparator.testWaitQueueComparatorParallelism org.apache.hadoop.hive.llap.daemon.impl.comparator.TestFirstInFirstOutComparator.testWaitQueueComparatorWithinDagPriority org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparator 
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorParallelism org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefault org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfDefaultEmptyString org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfOverridden org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.testGetMetaConfUnknownPreperty org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping org.apache.hadoop.hive.metastore.TestHiveMetaStoreWithEnvironmentContext.testEnvironmentContext org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAddPartitionWithValidPartVal org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode 
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler org.apache.hadoop.hive.metastore.hbase.TestHBaseImport.org.apache.hadoop.hive.metastore.hbase.TestHBaseImport org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
[jira] [Commented] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267415#comment-15267415 ] Vaibhav Gumashta commented on HIVE-13390: - Failures unrelated, committing. > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, > HIVE-13390.2.patch, HIVE-13390.3.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13235) Insert from select generates incorrect result when hive.optimize.constant.propagation is on
[ https://issues.apache.org/jira/browse/HIVE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267414#comment-15267414 ] Aihua Xu commented on HIVE-13235: - Attached patch 4: for the non-CBO case, we keep track of the select column's original expression and use that, rather than the alias, to match against another column info. We do not do that for the CBO case since CBO has an optimized AST tree and may not have the original expression. > Insert from select generates incorrect result when > hive.optimize.constant.propagation is on > --- > > Key: HIVE-13235 > URL: https://issues.apache.org/jira/browse/HIVE-13235 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13235.1.patch, HIVE-13235.2.patch, > HIVE-13235.3.patch, HIVE-13235.4.patch > > > The following query returns an incorrect result when constant optimization is > turned on. The subquery happens to have an alias p1 that is the same as the > input partition name. The constant optimizer will incorrectly replace it with the > constant. > When the constant optimizer is turned off, we get the correct result. > {noformat} > set hive.cbo.enable=false; > set hive.optimize.constant.propagation = true; > create table t1(c1 string, c2 double) partitioned by (p1 string, p2 string); > create table t2(p1 double, c2 string); > insert into table t1 partition(p1='40', p2='p2') values('c1', 0.0); > INSERT OVERWRITE TABLE t2 select if((c2 = 0.0), c2, '0') as p1, 2 as p2 from > t1 where c1 = 'c1' and p1 = '40'; > select * from t2; > 40 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13235) Insert from select generates incorrect result when hive.optimize.constant.propagation is on
[ https://issues.apache.org/jira/browse/HIVE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13235: Attachment: HIVE-13235.4.patch > Insert from select generates incorrect result when > hive.optimize.constant.propagation is on > --- > > Key: HIVE-13235 > URL: https://issues.apache.org/jira/browse/HIVE-13235 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13235.1.patch, HIVE-13235.2.patch, > HIVE-13235.3.patch, HIVE-13235.4.patch > > > The following query returns an incorrect result when constant optimization is > turned on. The subquery happens to have an alias p1 that is the same as the > input partition name. The constant optimizer will incorrectly replace it with the > constant. > When the constant optimizer is turned off, we get the correct result. > {noformat} > set hive.cbo.enable=false; > set hive.optimize.constant.propagation = true; > create table t1(c1 string, c2 double) partitioned by (p1 string, p2 string); > create table t2(p1 double, c2 string); > insert into table t1 partition(p1='40', p2='p2') values('c1', 0.0); > INSERT OVERWRITE TABLE t2 select if((c2 = 0.0), c2, '0') as p1, 2 as p2 from > t1 where c1 = 'c1' and p1 = '40'; > select * from t2; > 40 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13653) improve config error messages for LLAP cache size/etc
[ https://issues.apache.org/jira/browse/HIVE-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267395#comment-15267395 ] Sergey Shelukhin commented on HIVE-13653: - Deriving IO memory is not a good idea, as it depends on user preferences and on executor size and count. Arena size is already derived. Min allocation and max allocation are assumed to be essentially hardcoded; there's rarely a reason to change them (min allocation apparently impacts perf a bit, so some people may lower it). The issues we saw came from people setting the cache size to small values, like 4 MB; MIN_SIZE prevents that now, so the only way these settings can be in error is if someone configures them manually. > improve config error messages for LLAP cache size/etc > - > > Key: HIVE-13653 > URL: https://issues.apache.org/jira/browse/HIVE-13653 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13653.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13390) HiveServer2: Add more test to ZK service discovery using MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267382#comment-15267382 ] Vaibhav Gumashta commented on HIVE-13390: - Validating test failures. Most seem unrelated, running some locally. If they look unrelated, will commit this. > HiveServer2: Add more test to ZK service discovery using MiniHS2 > > > Key: HIVE-13390 > URL: https://issues.apache.org/jira/browse/HIVE-13390 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13390.1.patch, HIVE-13390.1.patch, > HIVE-13390.2.patch, HIVE-13390.3.patch, keystore.jks, > keystore_exampledotcom.jks, truststore.jks > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13653) improve config error messages for LLAP cache size/etc
[ https://issues.apache.org/jira/browse/HIVE-13653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267383#comment-15267383 ] Prasanth Jayachandran commented on HIVE-13653: -- nit: MIN_SIZE: minimum of what, arena or IO memory? Rename accordingly. nit: '(not recommended)' in the exception msg: can we avoid suggesting something we don't recommend? Options confuse users :) Otherwise, +1. Orthogonal issue: can we make all these sizes derivable from one config? For example, if IO is enabled, derive the sizes from -Xmx in an 'auto' mode: 10% of Xmx = IO memory; 10% of IO memory = min allocation; 40% of IO memory = max allocation; min(1GB, 25% of IO memory) = arena size; etc. > improve config error messages for LLAP cache size/etc > - > > Key: HIVE-13653 > URL: https://issues.apache.org/jira/browse/HIVE-13653 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13653.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
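The 'auto' derivation proposed in the comment above boils down to a few fixed ratios off the JVM heap. The sketch below is a hypothetical helper, not an existing Hive config or API; the percentages come straight from the comment, while the key names are invented for illustration.

```python
# Hypothetical sketch of the proposed 'auto' sizing (not real Hive config
# keys): derive all LLAP cache-related sizes from the JVM heap size (-Xmx).

GB = 1024 ** 3

def derive_llap_sizes(xmx_bytes):
    io_memory = int(0.10 * xmx_bytes)  # 10% of Xmx = IO memory
    return {
        "io.memory.size": io_memory,
        "allocator.min.alloc": int(0.10 * io_memory),             # 10% of IO memory
        "allocator.max.alloc": int(0.40 * io_memory),             # 40% of IO memory
        "allocator.arena.size": min(GB, int(0.25 * io_memory)),   # min(1GB, 25% of IO memory)
    }

# e.g. with a 10 GB heap, IO memory comes out to 1 GB and the arena to 256 MB
sizes = derive_llap_sizes(10 * GB)
```

A scheme like this is what Sergey's reply pushes back on: the 10% heap ratio bakes in an assumption about how much of the executor's memory the user wants to spend on cache, which is exactly the user-preference dependency he cites.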
[jira] [Commented] (HIVE-13645) Beeline needs null-guard around hiveVars and hiveConfVars read
[ https://issues.apache.org/jira/browse/HIVE-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267377#comment-15267377 ] Sushanth Sowmyan commented on HIVE-13645: - Thanks, pushed to branch-2.0 as well. > Beeline needs null-guard around hiveVars and hiveConfVars read > -- > > Key: HIVE-13645 > URL: https://issues.apache.org/jira/browse/HIVE-13645 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.1.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Minor > Fix For: 1.3.0, 1.2.2, 2.1.0, 2.0.1 > > Attachments: HIVE-13645.patch > > > Beeline has a bug wherein if a user does a !save ever, then on next load, if > beeline.hiveVariables or beeline.hiveconfvariables are empty, i.e. \{\} or > unspecified, then it loads it as null, and then, on next connect, there is no > null-check on these variables leading to an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
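The bug class described in this issue (a saved-but-empty property map coming back as null and blowing up on the next connect) can be modeled compactly. Beeline itself is Java; the Python sketch below is illustrative only, with invented function names and JSON standing in for Beeline's saved-options format, but it shows the null-guard pattern the fix applies.

```python
# Illustrative sketch (Beeline is Java; names here are hypothetical): if
# beeline.hiveVariables was saved as empty/unspecified, a naive load yields
# None, and the connect path must guard against it before iterating.

import json

def load_saved_vars(saved_json):
    """An unspecified key loads as None; substitute an empty dict (the
    null-guard) so callers never see None."""
    data = json.loads(saved_json)
    hive_vars = data.get("beeline.hiveVariables")
    return hive_vars if hive_vars is not None else {}

def connect_with_vars(hive_vars):
    # Without the guard above, iterating None here would raise a TypeError,
    # Python's analogue of the NPE described in the issue.
    return "&".join(f"{k}={v}" for k, v in hive_vars.items())

print(connect_with_vars(load_saved_vars('{}')))  # empty string, no crash
```

The actual patch presumably applies the same idea at the read sites for both hiveVars and hiveConfVars, so the connect path never dereferences a null map.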
[jira] [Updated] (HIVE-13645) Beeline needs null-guard around hiveVars and hiveConfVars read
[ https://issues.apache.org/jira/browse/HIVE-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan updated HIVE-13645: Fix Version/s: 2.0.1 > Beeline needs null-guard around hiveVars and hiveConfVars read > -- > > Key: HIVE-13645 > URL: https://issues.apache.org/jira/browse/HIVE-13645 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.1.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Minor > Fix For: 1.3.0, 1.2.2, 2.1.0, 2.0.1 > > Attachments: HIVE-13645.patch > > > Beeline has a bug wherein if a user does a !save ever, then on next load, if > beeline.hiveVariables or beeline.hiveconfvariables are empty, i.e. \{\} or > unspecified, then it loads it as null, and then, on next connect, there is no > null-check on these variables leading to an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13645) Beeline needs null-guard around hiveVars and hiveConfVars read
[ https://issues.apache.org/jira/browse/HIVE-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267366#comment-15267366 ] Sergey Shelukhin commented on HIVE-13645: - it's open for commits since RC is blocked > Beeline needs null-guard around hiveVars and hiveConfVars read > -- > > Key: HIVE-13645 > URL: https://issues.apache.org/jira/browse/HIVE-13645 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.1.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Minor > Fix For: 1.3.0, 1.2.2, 2.1.0 > > Attachments: HIVE-13645.patch > > > Beeline has a bug wherein if a user does a !save ever, then on next load, if > beeline.hiveVariables or beeline.hiveconfvariables are empty, i.e. \{\} or > unspecified, then it loads it as null, and then, on next connect, there is no > null-check on these variables leading to an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13670) Improve Beeline reconnect semantics
[ https://issues.apache.org/jira/browse/HIVE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan updated HIVE-13670: Status: Patch Available (was: Open) > Improve Beeline reconnect semantics > --- > > Key: HIVE-13670 > URL: https://issues.apache.org/jira/browse/HIVE-13670 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.1.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan > Attachments: HIVE-13670.patch > > > For most users of beeline, chances are that they will be using it with a > single HS2 instance most of the time. In this scenario, having them type out > a jdbc uri for HS2 every single time to !connect can get tiresome. Thus, we > should improve semantics so that if a user does a successful !connect, then > we must store the last-connected-to-url, so that if they do a !close, and > then a !reconnect, then !reconnect should attempt to connect to the last > successfully used url. > Also, if they then do a !save, then that last-successfully-used url must be > saved, so that in subsequent sessions, they can simply do !reconnect rather > than specifying a url for !connect. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13670) Improve Beeline reconnect semantics
[ https://issues.apache.org/jira/browse/HIVE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan updated HIVE-13670: Attachment: HIVE-13670.patch Patch attached. Nothing fancy like adding command-line params, etc.; it simply extends the functionality of !reconnect. > Improve Beeline reconnect semantics > --- > > Key: HIVE-13670 > URL: https://issues.apache.org/jira/browse/HIVE-13670 > Project: Hive > Issue Type: Improvement >Affects Versions: 2.1.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan > Attachments: HIVE-13670.patch > > > For most users of beeline, chances are that they will be using it with a > single HS2 instance most of the time. In this scenario, having them type out > a jdbc uri for HS2 every single time to !connect can get tiresome. Thus, we > should improve semantics so that if a user does a successful !connect, then > we must store the last-connected-to-url, so that if they do a !close, and > then a !reconnect, then !reconnect should attempt to connect to the last > successfully used url. > Also, if they then do a !save, then that last-successfully-used url must be > saved, so that in subsequent sessions, they can simply do !reconnect rather > than specifying a url for !connect. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
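The semantics requested in the description reduce to remembering the last successfully used URL across !close (and, via !save, across sessions). The sketch below is a hypothetical Python model of that state machine, not Beeline's actual Java implementation; the class and method names are invented for illustration.

```python
# Hypothetical sketch (not Beeline's code): remember the last URL that
# connected successfully, so !close followed by !reconnect reuses it.

class Session:
    def __init__(self, saved_url=None):
        # saved_url models a last-connected-to URL restored by a prior !save
        self.last_url = saved_url
        self.open = False

    def connect(self, url):          # models !connect <url>
        self.open = True
        self.last_url = url          # record only on successful connect
        return url

    def close(self):                 # models !close
        self.open = False

    def reconnect(self):             # models !reconnect
        if self.last_url is None:
            raise RuntimeError("no previous connection; use !connect <url>")
        return self.connect(self.last_url)

s = Session()
s.connect("jdbc:hive2://hs2-host:10000/default")
s.close()
s.reconnect()  # reopens against the remembered URL, no URL retyped
```

The `saved_url` constructor argument captures the second half of the request: if the remembered URL is persisted by !save, a fresh session can start with it populated and !reconnect works immediately.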
[jira] [Commented] (HIVE-13645) Beeline needs null-guard around hiveVars and hiveConfVars read
[ https://issues.apache.org/jira/browse/HIVE-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267301#comment-15267301 ] Sushanth Sowmyan commented on HIVE-13645: - [~sershe], Is branch-2.0 okay for commits right now, or should we hold off? > Beeline needs null-guard around hiveVars and hiveConfVars read > -- > > Key: HIVE-13645 > URL: https://issues.apache.org/jira/browse/HIVE-13645 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.1.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Minor > Fix For: 1.3.0, 1.2.2, 2.1.0 > > Attachments: HIVE-13645.patch > > > Beeline has a bug wherein if a user does a !save ever, then on next load, if > beeline.hiveVariables or beeline.hiveconfvariables are empty, i.e. \{\} or > unspecified, then it loads it as null, and then, on next connect, there is no > null-check on these variables leading to an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13645) Beeline needs null-guard around hiveVars and hiveConfVars read
[ https://issues.apache.org/jira/browse/HIVE-13645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sushanth Sowmyan updated HIVE-13645: Resolution: Fixed Fix Version/s: 2.1.0 1.2.2 1.3.0 Status: Resolved (was: Patch Available) > Beeline needs null-guard around hiveVars and hiveConfVars read > -- > > Key: HIVE-13645 > URL: https://issues.apache.org/jira/browse/HIVE-13645 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.1.0 >Reporter: Sushanth Sowmyan >Assignee: Sushanth Sowmyan >Priority: Minor > Fix For: 1.3.0, 1.2.2, 2.1.0 > > Attachments: HIVE-13645.patch > > > Beeline has a bug wherein if a user does a !save ever, then on next load, if > beeline.hiveVariables or beeline.hiveconfvariables are empty, i.e. \{\} or > unspecified, then it loads it as null, and then, on next connect, there is no > null-check on these variables leading to an NPE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)