[jira] [Commented] (HIVE-12634) Add command to kill an ACID transaction
[ https://issues.apache.org/jira/browse/HIVE-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229713#comment-15229713 ] Hive QA commented on HIVE-12634:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797206/HIVE-12634.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 9979 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.metastore.TestMetaStoreEventListener.testListener
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testSimpleTable
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testThreadPoolMetrics
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7495/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7495/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7495/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 23 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797206 - PreCommit-HIVE-TRUNK-Build

> Add command to kill an ACID transaction
> --
>
> Key: HIVE-12634
> URL: https://issues.apache.org/jira/browse/HIVE-12634
> Project: Hive
> Issue Type: New Feature
> Components: Transactions
> Affects Versions: 1.0.0
> Reporter: Eugene Koifman
> Assignee: Wei Zheng
> Attachments: HIVE-12634.1.patch
>
> Should add a CLI command to abort a (runaway) transaction.
> This should clean up all state related to this txn.
> The initiator of this (if still alive) will get an error trying to heartbeat/commit, i.e. will become aware that the txn is dead.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12159) Create vectorized readers for the complex types
[ https://issues.apache.org/jira/browse/HIVE-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229701#comment-15229701 ] Matt McCline commented on HIVE-12159: - +1 LGTM tests pending > Create vectorized readers for the complex types > --- > > Key: HIVE-12159 > URL: https://issues.apache.org/jira/browse/HIVE-12159 > Project: Hive > Issue Type: Sub-task >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HIVE-12159.patch, HIVE-12159.patch > > > We need vectorized readers for the complex types. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13268) Add a HA mini cluster type in MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229674#comment-15229674 ] Takanobu Asanuma commented on HIVE-13268: - Thank you very much for reviewing and committing! > Add a HA mini cluster type in MiniHS2 > - > > Key: HIVE-13268 > URL: https://issues.apache.org/jira/browse/HIVE-13268 > Project: Hive > Issue Type: Test > Components: Tests >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Minor > Fix For: 2.1.0 > > Attachments: HIVE-13268.1.patch, HIVE-13268.2.patch, > HIVE-13268.3.patch, HIVE-13268.4.patch, HIVE-13268.5.patch > > > We need a HA mini cluster for unit tests. This jira is for implementing that > in MiniHS2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC
[ https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9660: --- Attachment: HIVE-9660.06.patch > store end offset of compressed data for RG in RowIndex in ORC > - > > Key: HIVE-9660 > URL: https://issues.apache.org/jira/browse/HIVE-9660 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, > HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, > HIVE-9660.06.patch, HIVE-9660.patch, HIVE-9660.patch > > > Right now the end offset is estimated, which in some cases results in tons of > extra data being read. > We can add a separate array to RowIndex (positions_v2?) that stores number of > compressed buffers for each RG, or end offset, or something, to remove this > estimation magic -- This message was sent by Atlassian JIRA (v6.3.4#6332)
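[Editor's note] As a rough illustration of the description above (hypothetical numbers and names, not ORC's reader code): with only the next row group's start position known, a reader must over-read by a safety margin for compressed buffers that may straddle the boundary, whereas a stored end offset gives the exact range.

```java
// Illustrative sketch only -- not ORC's actual reader code. Shows why an
// exact end offset per row group (RG) shrinks the read range compared to
// estimating it from the next RG's start plus a safety slop.
public class RowGroupRange {
    // Hypothetical slop: one max-size compressed buffer (ORC's default
    // compression chunk size is 256KB).
    static final long SLOP = 256 * 1024;

    // Estimated range: must over-read past the next RG's start position.
    static long estimatedLength(long start, long nextStart) {
        return (nextStart - start) + SLOP;
    }

    // Exact range: possible if RowIndex also stored each RG's end offset.
    static long exactLength(long start, long endOffset) {
        return endOffset - start;
    }

    public static void main(String[] args) {
        long start = 1_000_000, nextStart = 1_040_000, end = 1_032_768;
        System.out.println("estimated bytes: " + estimatedLength(start, nextStart));
        System.out.println("exact bytes:     " + exactLength(start, end));
    }
}
```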
[jira] [Updated] (HIVE-12159) Create vectorized readers for the complex types
[ https://issues.apache.org/jira/browse/HIVE-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HIVE-12159: - Attachment: HIVE-12159.patch This patch addresses the qfile test failures and Matt's review comments: * replaced long batchSize with int. * deleted confusing TODO comment * renamed and commented the TreeReader.nextBatch method. * fixed the DecimalTreeReader The github pull request has the changes as a separate commit, so you can isolate the new changes. > Create vectorized readers for the complex types > --- > > Key: HIVE-12159 > URL: https://issues.apache.org/jira/browse/HIVE-12159 > Project: Hive > Issue Type: Sub-task >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HIVE-12159.patch, HIVE-12159.patch > > > We need vectorized readers for the complex types. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-13452) StatsOptimizer should return no rows on empty table with group by
[ https://issues.apache.org/jira/browse/HIVE-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong reassigned HIVE-13452: -- Assignee: Pengcheng Xiong > StatsOptimizer should return no rows on empty table with group by > - > > Key: HIVE-13452 > URL: https://issues.apache.org/jira/browse/HIVE-13452 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > > {code} > create table t1 (a int); > analyze table t1 compute statistics; > analyze table t1 compute statistics for columns; > select count(1) from t1 group by 1; > set hive.compute.query.using.stats=true; > select count(1) from t1 group by 1; > {code} > In both cases result set should be empty. However, with statsoptimizer on > Hive returns one row with value 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
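[Editor's note] The semantics at stake in the repro above can be restated with plain JDK streams (an analogy, not Hive code): a global aggregate over empty input produces exactly one row, while a grouped aggregate produces none, because there are no groups to aggregate.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class EmptyGroupBy {
    // SELECT count(1) FROM t1  -> always exactly one result row.
    static long globalCount(List<Integer> rows) {
        return rows.stream().count();
    }

    // SELECT count(1) FROM t1 GROUP BY 1 -> one row per group, hence zero
    // rows when the table is empty.
    static Map<Integer, Long> groupedCount(List<Integer> rows) {
        return rows.stream().collect(
            Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<Integer> emptyTable = List.of();
        System.out.println(globalCount(emptyTable));         // 0 (one row)
        System.out.println(groupedCount(emptyTable).size()); // 0 (zero rows)
    }
}
```

This is the distinction the StatsOptimizer must preserve when answering from statistics instead of scanning.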
[jira] [Commented] (HIVE-13438) Add a service check script for llap
[ https://issues.apache.org/jira/browse/HIVE-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229615#comment-15229615 ] Gunther Hagleitner commented on HIVE-13438: --- [~vikram.dixit] you've used a var for the table name, but the select statement is against a table "students". Other than that +1. > Add a service check script for llap > --- > > Key: HIVE-13438 > URL: https://issues.apache.org/jira/browse/HIVE-13438 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13438.1.patch > > > We want to have a test script that can be run by an installer such as ambari > that makes sure that the service is up and running. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13360) Refactoring Hive Authorization
[ https://issues.apache.org/jira/browse/HIVE-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229612#comment-15229612 ] Hive QA commented on HIVE-13360:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797170/HIVE-13360.04.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 9928 tests executed

*Failed tests:*
{noformat}
TestMiniTezCliDriver-tez_smb_empty.q-mapjoin_decimal.q-transform_ppr2.q-and-12-more - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testSimpleLocalAllocation
org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping
org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler
org.apache.hadoop.hive.ql.parse.authorization.TestSessionUserName.testSessionUserIpAddress
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7494/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7494/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7494/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 37 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797170 - PreCommit-HIVE-TRUNK-Build

> Refactoring Hive Authorization
> -
[jira] [Updated] (HIVE-13320) Apply HIVE-11544 to explicit conversions as well as implicit ones
[ https://issues.apache.org/jira/browse/HIVE-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nita Dembla updated HIVE-13320: --- Attachment: HIVE-13320.2.patch > Apply HIVE-11544 to explicit conversions as well as implicit ones > - > > Key: HIVE-13320 > URL: https://issues.apache.org/jira/browse/HIVE-13320 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 1.3.0, 1.2.1, 2.0.0, 2.1.0 >Reporter: Gopal V >Assignee: Nita Dembla > Attachments: HIVE-13320.1.patch, HIVE-13320.2.patch > > > Parsing 1 million blank values through cast(x as int) is 3x slower than > parsing a valid single digit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
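[Editor's note] The 3x slowdown the description mentions is characteristic of exception-driven parsing. The sketch below is illustrative only (it is not Hive's serde code): it contrasts a parse that pays for a thrown NumberFormatException on every blank value with one that rejects blanks via a cheap up-front check, which is the spirit of the HIVE-11544-style fix being extended to explicit casts.

```java
// Illustrative sketch of why blank input is slow through cast(x as int):
// the exception path fills in a stack trace on every blank value, while a
// validating parse rejects it with a cheap check and throws nothing.
public class SafeIntParse {
    // Slow path: relies on the exception for the failure case.
    static Integer parseOrNullViaException(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return null; // constructing the exception dominates the cost
        }
    }

    // Fast path: blank input never reaches the exception machinery.
    static Integer parseOrNull(String s) {
        if (s == null || s.trim().isEmpty()) {
            return null;
        }
        try {
            return Integer.parseInt(s.trim());
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrNull(""));  // null, no exception thrown
        System.out.println(parseOrNull("7")); // 7
    }
}
```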
[jira] [Updated] (HIVE-13320) Apply HIVE-11544 to explicit conversions as well as implicit ones
[ https://issues.apache.org/jira/browse/HIVE-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nita Dembla updated HIVE-13320: --- Status: Patch Available (was: Open) > Apply HIVE-11544 to explicit conversions as well as implicit ones > - > > Key: HIVE-13320 > URL: https://issues.apache.org/jira/browse/HIVE-13320 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.0.0, 1.2.1, 1.3.0, 2.1.0 >Reporter: Gopal V >Assignee: Nita Dembla > Attachments: HIVE-13320.1.patch, HIVE-13320.2.patch > > > Parsing 1 million blank values through cast(x as int) is 3x slower than > parsing a valid single digit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13320) Apply HIVE-11544 to explicit conversions as well as implicit ones
[ https://issues.apache.org/jira/browse/HIVE-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nita Dembla updated HIVE-13320: --- Status: Open (was: Patch Available) > Apply HIVE-11544 to explicit conversions as well as implicit ones > - > > Key: HIVE-13320 > URL: https://issues.apache.org/jira/browse/HIVE-13320 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.0.0, 1.2.1, 1.3.0, 2.1.0 >Reporter: Gopal V >Assignee: Nita Dembla > Attachments: HIVE-13320.1.patch > > > Parsing 1 million blank values through cast(x as int) is 3x slower than > parsing a valid single digit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13409) Fix JDK8 test failures related to COLUMN_STATS_ACCURATE
[ https://issues.apache.org/jira/browse/HIVE-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229554#comment-15229554 ] Mohit Sabharwal commented on HIVE-13409: Actually, this is more involved as JSONObject is using HashMap internally. StatsSetupConst::setColumnStatsState() {code} stats = new JSONObject(statsAcc); {code} I don't see a constructor that takes both a user-constructed LinkedHashMap and a JSON string, so this needs a Java 8-specific golden file update :( > Fix JDK8 test failures related to COLUMN_STATS_ACCURATE > --- > > Key: HIVE-13409 > URL: https://issues.apache.org/jira/browse/HIVE-13409 > Project: Hive > Issue Type: Bug > Components: Tests >Reporter: Mohit Sabharwal >Assignee: Mohit Sabharwal > > 126 failures have crept into JDK8 tests since we resolved HIVE-8607 > http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-TRUNK-JAVA8/ > Majority relate to the ordering of a "COLUMN_STATS_ACCURATE" partition > property. > Looks like a simple fix, use an ordered map in > HiveStringUtils.getPropertiesExplain() -- This message was sent by Atlassian JIRA (v6.3.4#6332)
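[Editor's note] The JDK-dependent ordering described above can be seen with plain JDK maps (no org.json involved; statsInInsertionOrder and renderedKeys are made-up helpers): HashMap iteration order is unspecified and changed between JDK 7 and JDK 8, so any golden-file output rendered from a HashMap can differ by JDK, while a LinkedHashMap makes the order deterministic.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MapOrdering {
    // Build a stats-like map whose iteration order is the insertion order.
    static Map<String, String> statsInInsertionOrder() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("BASIC_STATS", "true");
        m.put("COLUMN_STATS", "true");
        return m;
    }

    // Render the keys the way a golden file would see them.
    static List<String> renderedKeys(Map<String, String> stats) {
        return new ArrayList<>(stats.keySet());
    }

    public static void main(String[] args) {
        // Deterministic across JDKs: insertion order is part of the contract.
        System.out.println(renderedKeys(statsInInsertionOrder()));

        // Same entries in a HashMap: iteration order is unspecified and
        // may differ between JDK versions.
        System.out.println(renderedKeys(new HashMap<>(statsInInsertionOrder())));
    }
}
```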
[jira] [Commented] (HIVE-13452) StatsOptimizer should return no rows on empty table with group by
[ https://issues.apache.org/jira/browse/HIVE-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229534#comment-15229534 ] Ashutosh Chauhan commented on HIVE-13452: - [~pxiong] Can you take a look? > StatsOptimizer should return no rows on empty table with group by > - > > Key: HIVE-13452 > URL: https://issues.apache.org/jira/browse/HIVE-13452 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Ashutosh Chauhan > > {code} > create table t1 (a int); > analyze table t1 compute statistics; > analyze table t1 compute statistics for columns; > select count(1) from t1 group by 1; > set hive.compute.query.using.stats=true; > select count(1) from t1 group by 1; > {code} > In both cases result set should be empty. However, with statsoptimizer on > Hive returns one row with value 0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data
[ https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13391: Attachment: HIVE-13391.03.patch Updated the patch. Frankly, even though there's no clear case where it would make a difference, I don't like extending the scope of the keytab over the entire task from just the reader (IO elevator part could be removed anyway). I renamed the configs to indicate the scope change. > add an option to LLAP to use keytab to authenticate to read data > > > Key: HIVE-13391 > URL: https://issues.apache.org/jira/browse/HIVE-13391 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, > HIVE-13391.03.patch, HIVE-13391.patch > > > This can be used for non-doAs case to allow access to clients who don't > propagate HDFS tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13268) Add a HA mini cluster type in MiniHS2
[ https://issues.apache.org/jira/browse/HIVE-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13268: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Committed to master. Thank you for the patch! > Add a HA mini cluster type in MiniHS2 > - > > Key: HIVE-13268 > URL: https://issues.apache.org/jira/browse/HIVE-13268 > Project: Hive > Issue Type: Test > Components: Tests >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Minor > Fix For: 2.1.0 > > Attachments: HIVE-13268.1.patch, HIVE-13268.2.patch, > HIVE-13268.3.patch, HIVE-13268.4.patch, HIVE-13268.5.patch > > > We need a HA mini cluster for unit tests. This jira is for implementing that > in MiniHS2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12244) Refactoring code for avoiding of comparison of Strings and do comparison on Path
[ https://issues.apache.org/jira/browse/HIVE-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229479#comment-15229479 ] Ashutosh Chauhan commented on HIVE-12244: - After applying your patch and running -Dqfile_regex=skewjoin_mapjoin.* I see plan changes for all those queries. > Refactoring code for avoiding of comparison of Strings and do comparison on > Path > > > Key: HIVE-12244 > URL: https://issues.apache.org/jira/browse/HIVE-12244 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 0.13.0, 0.14.0, 1.0.0, 1.2.1 >Reporter: Alina Abramova >Assignee: Alina Abramova >Priority: Minor > Labels: patch > Fix For: 1.2.1 > > Attachments: HIVE-12244.1.patch, HIVE-12244.2.patch, > HIVE-12244.3.patch, HIVE-12244.4.patch, HIVE-12244.5.patch, > HIVE-12244.6.patch, HIVE-12244.7.patch, HIVE-12244.8.patch, > HIVE-12244.8.patch, HIVE-12244.9.patch > > > In Hive, a String is often used to represent a path, and this causes new issues. > We need to compare them with equals(), but comparing Strings is often not the right > way to compare paths. > I think if we use Path from org.apache.hadoop.fs we will avoid new problems > in the future. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
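[Editor's note] The point of the description can be illustrated with java.nio.file.Path standing in for org.apache.hadoop.fs.Path (the class the patch actually uses): two strings naming the same location can fail String.equals() while the parsed paths compare equal, because a Path normalizes redundant separators at parse time.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of why comparing paths as Strings is fragile. Paths and names
// here are made up; on a POSIX filesystem the redundant separator is
// normalized away when the string is parsed into a Path.
public class PathVsString {
    public static void main(String[] args) {
        String a = "/warehouse/db/table";
        String b = "/warehouse/db//table"; // extra separator, same location

        Path pa = Paths.get(a);
        Path pb = Paths.get(b);

        System.out.println(a.equals(b));   // false: raw strings differ
        System.out.println(pa.equals(pb)); // true on POSIX: normalized form
    }
}
```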
[jira] [Commented] (HIVE-11351) Column Found in more than One Tables/Subqueries
[ https://issues.apache.org/jira/browse/HIVE-11351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229468#comment-15229468 ] Ashutosh Chauhan commented on HIVE-11351: - This might have same issue as HIVE-13235 cc: [~aihuaxu] > Column Found in more than One Tables/Subqueries > --- > > Key: HIVE-11351 > URL: https://issues.apache.org/jira/browse/HIVE-11351 > Project: Hive > Issue Type: Bug > Environment: HIVE 1.1.0 >Reporter: MK >Assignee: Alina Abramova > Attachments: HIVE-11351-branch-1.0.patch > > > when executing a script: > INSERT overwrite TABLE tmp.tmp_dim_cpttr_categ1 >SELECT DISTINCT cur.categ_id AS categ_id, >cur.categ_code AS categ_code, >cur.categ_name AS categ_name, >cur.categ_parnt_id AS categ_parnt_id, >par.categ_name AS categ_parnt_name, >cur.mc_site_id AS mc_site_id >FROM tmp.tmp_dim_cpttr_categ cur >LEFT OUTER JOIN tmp.tmp_dim_cpttr_categ par >ON cur.categ_parnt_id = par.categ_id; > an error occurs: SemanticException Column categ_name Found in more than One > Tables/Subqueries > when the alias categ_name is modified to categ_name_cur, it executes > successfully: > INSERT overwrite TABLE tmp.tmp_dim_cpttr_categ1 >SELECT DISTINCT cur.categ_id AS categ_id, >cur.categ_code AS categ_code, >cur.categ_name AS categ_name_cur, >cur.categ_parnt_id AS categ_parnt_id, >par.categ_name AS categ_parnt_name, >cur.mc_site_id AS mc_site_id >FROM tmp.tmp_dim_cpttr_categ cur >LEFT OUTER JOIN tmp.tmp_dim_cpttr_categ par >ON cur.categ_parnt_id = par.categ_id; > this happened when we upgraded Hive from 0.10 to 1.1.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
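[Editor's note] For context, the SemanticException here is Hive's generic ambiguity check. The toy resolver below (hypothetical, not Hive's SemanticAnalyzer) shows the condition behind the message: both sides of the self-join expose a column named categ_name, so more than one qualified candidate matches a bare name. The report is that after the upgrade the check fires even though the query uses qualified references, which is the suspected bug.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy join scope for the self-join above: aliases cur and par both expose
// the same column names (hypothetical column subset).
public class ColumnResolver {
    static final Map<String, List<String>> SCOPE = Map.of(
        "cur", List.of("categ_id", "categ_name"),
        "par", List.of("categ_id", "categ_name"));

    // All qualified columns a bare name could refer to in this scope.
    static List<String> candidates(String name) {
        List<String> matches = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : SCOPE.entrySet()) {
            if (e.getValue().contains(name)) {
                matches.add(e.getKey() + "." + name);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<String> m = candidates("categ_name");
        if (m.size() > 1) {
            // The condition behind the SemanticException in the report.
            System.out.println("Column categ_name Found in more than One Tables/Subqueries: " + m);
        }
    }
}
```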
[jira] [Commented] (HIVE-13235) Insert from select generates incorrect result when hive.optimize.constant.propagation is on
[ https://issues.apache.org/jira/browse/HIVE-13235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229467#comment-15229467 ] Ashutosh Chauhan commented on HIVE-13235: - [~aihuaxu] Can you create a RB for this ? > Insert from select generates incorrect result when > hive.optimize.constant.propagation is on > --- > > Key: HIVE-13235 > URL: https://issues.apache.org/jira/browse/HIVE-13235 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13235.1.patch, HIVE-13235.2.patch, > HIVE-13235.3.patch > > > The following query returns incorrect result when constant optimization is > turned on. The subquery happens to have an alias p1 to be the same as the > input partition name. Constant optimizer will optimize it incorrectly as the > constant. > When constant optimizer is turned off, we will get the correct result. > {noformat} > set hive.cbo.enable=false; > set hive.optimize.constant.propagation = true; > create table t1(c1 string, c2 double) partitioned by (p1 string, p2 string); > create table t2(p1 double, c2 string); > insert into table t1 partition(p1='40', p2='p2') values('c1', 0.0); > INSERT OVERWRITE TABLE t2 select if((c2 = 0.0), c2, '0') as p1, 2 as p2 from > t1 where c1 = 'c1' and p1 = '40'; > select * from t2; > 40 2 > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
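[Editor's note] A toy model of the failure mode in the repro above (not Hive's ConstantPropagateProcFactory; all names are illustrative): the WHERE clause proves partition column p1 = '40', and a propagator that keys known constants by bare name will wrongly fold away any SELECT expression that merely *aliases* its output as p1.

```java
import java.util.Map;

public class ConstantFolding {
    // Constants proven by the WHERE clause (p1 = '40'), keyed by bare name.
    // The flaw: output aliases share that namespace.
    static final Map<String, String> KNOWN_CONSTANTS = Map.of("p1", "'40'");

    // Buggy substitution: if the *output alias* matches a known constant's
    // name, the whole expression is folded away.
    static String buggyFold(String outputAlias, String expr) {
        return KNOWN_CONSTANTS.getOrDefault(outputAlias, expr);
    }

    public static void main(String[] args) {
        String alias = "p1"; // alias happens to shadow partition column p1
        String expr = "if((c2 = 0.0), c2, '0')"; // the value actually selected

        System.out.println("buggy plan emits:   " + buggyFold(alias, expr)); // '40'
        // Correct behavior: substitution must key on what the expression
        // references, not on its output alias.
        System.out.println("correct plan emits: " + expr);
    }
}
```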
[jira] [Commented] (HIVE-13413) add a llapstatus command line tool
[ https://issues.apache.org/jira/browse/HIVE-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229445#comment-15229445 ] Prasanth Jayachandran commented on HIVE-13413:

Mostly looks good. Some minor comments:
1) // TODO Add additional information such as #executors, container size, etc. Create a follow-up? I guess at this point this tool is just used as a health check/status of daemons. Per-daemon configurations are obtained via JMX?
2) The daemon web address/status page currently shows Error 404. Is that part of this jira or another?
3) populateAppStatusFromLlapRegistry(): do we need to create a new Configuration object, or reuse the already created one?
4) llapExtraInstances.add(llapInstance); This line adds nulls to the list, right? I don't see it used anywhere other than logging. Use a boolean instead?
5) nit: remove dead code. // String nmUrl = (String) containerParams.get("hostUrl");
6) wow. Map>> :)

> add a llapstatus command line tool
> --
>
> Key: HIVE-13413
> URL: https://issues.apache.org/jira/browse/HIVE-13413
> Project: Hive
> Issue Type: Improvement
> Components: llap
> Reporter: Siddharth Seth
> Assignee: Siddharth Seth
> Attachments: HIVE-13413.01.patch, appComplete, invalidApp, oneContainerDown, running, starting
>
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12968) genNotNullFilterForJoinSourcePlan: needs to merge predicates into the multi-AND
[ https://issues.apache.org/jira/browse/HIVE-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-12968: Status: Patch Available (was: Open) > genNotNullFilterForJoinSourcePlan: needs to merge predicates into the > multi-AND > --- > > Key: HIVE-12968 > URL: https://issues.apache.org/jira/browse/HIVE-12968 > Project: Hive > Issue Type: Improvement > Components: Logical Optimizer >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Minor > Attachments: HIVE-12968.1.patch, HIVE-12968.2.patch, > HIVE-12968.3.patch, HIVE-12968.4.patch, HIVE-12968.5.patch, > HIVE-12968.6.patch, HIVE-12968.7.patch > > > {code} > predicate: ((cbigint is not null and cint is not null) and cint BETWEEN > 100 AND 300) (type: boolean) > {code} > does not fold the IS_NULL on cint, because of the structure of the AND clause. > For example, see {{tez_dynpart_hashjoin_1.q}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
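[Editor's note] The fix the title describes can be sketched on a hypothetical mini-IR (not Hive's ExprNodeDesc): flattening nested ANDs into one multi-AND conjunct list is what lets a later pass see that `cint is not null` is implied by `cint BETWEEN 100 AND 300` and drop it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MultiAnd {
    // A node is either a leaf predicate (and == null) or an AND of children.
    static class Expr {
        final String leaf;
        final List<Expr> and;
        Expr(String leaf) { this.leaf = leaf; this.and = null; }
        Expr(Expr... kids) { this.leaf = null; this.and = Arrays.asList(kids); }
    }

    // Collect every conjunct of an AND tree into one flat list.
    static List<String> flatten(Expr e) {
        List<String> out = new ArrayList<>();
        if (e.and == null) {
            out.add(e.leaf);
        } else {
            for (Expr kid : e.and) {
                out.addAll(flatten(kid)); // recurse through nested ANDs
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // ((cbigint is not null and cint is not null) and cint BETWEEN 100 AND 300)
        Expr pred = new Expr(
            new Expr(new Expr("cbigint is not null"), new Expr("cint is not null")),
            new Expr("cint BETWEEN 100 AND 300"));

        System.out.println(flatten(pred));
        // With a flat conjunct list, a later pass can drop "cint is not null"
        // because another conjunct on cint already implies it.
    }
}
```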
[jira] [Updated] (HIVE-12968) genNotNullFilterForJoinSourcePlan: needs to merge predicates into the multi-AND
[ https://issues.apache.org/jira/browse/HIVE-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-12968: Status: Open (was: Patch Available) > genNotNullFilterForJoinSourcePlan: needs to merge predicates into the > multi-AND > --- > > Key: HIVE-12968 > URL: https://issues.apache.org/jira/browse/HIVE-12968 > Project: Hive > Issue Type: Improvement > Components: Logical Optimizer >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Minor > Attachments: HIVE-12968.1.patch, HIVE-12968.2.patch, > HIVE-12968.3.patch, HIVE-12968.4.patch, HIVE-12968.5.patch, > HIVE-12968.6.patch, HIVE-12968.7.patch > > > {code} > predicate: ((cbigint is not null and cint is not null) and cint BETWEEN > 100 AND 300) (type: boolean) > {code} > does not fold the IS_NULL on cint, because of the structure of the AND clause. > For example, see {{tez_dynpart_hashjoin_1.q}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-12968) genNotNullFilterForJoinSourcePlan: needs to merge predicates into the multi-AND
[ https://issues.apache.org/jira/browse/HIVE-12968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-12968: Attachment: HIVE-12968.7.patch > genNotNullFilterForJoinSourcePlan: needs to merge predicates into the > multi-AND > --- > > Key: HIVE-12968 > URL: https://issues.apache.org/jira/browse/HIVE-12968 > Project: Hive > Issue Type: Improvement > Components: Logical Optimizer >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Minor > Attachments: HIVE-12968.1.patch, HIVE-12968.2.patch, > HIVE-12968.3.patch, HIVE-12968.4.patch, HIVE-12968.5.patch, > HIVE-12968.6.patch, HIVE-12968.7.patch > > > {code} > predicate: ((cbigint is not null and cint is not null) and cint BETWEEN > 100 AND 300) (type: boolean) > {code} > does not fold the IS_NULL on cint, because of the structure of the AND clause. > For example, see {{tez_dynpart_hashjoin_1.q}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13428) ZK SM in LLAP should have unique paths per cluster
[ https://issues.apache.org/jira/browse/HIVE-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229426#comment-15229426 ] Hive QA commented on HIVE-13428:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12797168/HIVE-13428.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 9976 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS
org.apache.hive.spark.client.TestSparkClient.testAddJarsAndFiles
org.apache.hive.spark.client.TestSparkClient.testCounters
org.apache.hive.spark.client.TestSparkClient.testErrorJob
org.apache.hive.spark.client.TestSparkClient.testJobSubmission
org.apache.hive.spark.client.TestSparkClient.testMetricsCollection
org.apache.hive.spark.client.TestSparkClient.testSimpleSparkJob
org.apache.hive.spark.client.TestSparkClient.testSyncRpc
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7493/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7493/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7493/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12797168 - PreCommit-HIVE-TRUNK-Build

> ZK SM in LLAP should have unique paths per cluster
> --
>
> Key: HIVE-13428
> URL: https://issues.apache.org/jira/browse/HIVE-13428
> Project: Hive
> Issue Type: Bug
> Affects Versions: 2.0.0
> Reporter: Sergey Shelukhin
> Assignee: Sergey Shelukhin
> Priority: Blocker
> Attachments: HIVE-13428.patch
>
> Noticed this while working on some other patch

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13440) remove hiveserver1 scripts under bin/ext/
[ https://issues.apache.org/jira/browse/HIVE-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229395#comment-15229395 ] Vaibhav Gumashta commented on HIVE-13440: - [~thejas] I do see service/src/gen/thrift/* > remove hiveserver1 scripts under bin/ext/ > - > > Key: HIVE-13440 > URL: https://issues.apache.org/jira/browse/HIVE-13440 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Thejas M Nair > Labels: newbie, trivial > > HIVE-6977 deleted hiveserver1, however the scripts remain under bin/ext/- > ls bin/ext/hiveserver.* > bin/ext/hiveserver.cmd bin/ext/hiveserver.sh > These should be removed as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13413) add a llapstatus command line tool
[ https://issues.apache.org/jira/browse/HIVE-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229385#comment-15229385 ] Prasanth Jayachandran commented on HIVE-13413: -- [~sseth] Can you please put the patch in RB for review? > add a llapstatus command line tool > -- > > Key: HIVE-13413 > URL: https://issues.apache.org/jira/browse/HIVE-13413 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13413.01.patch, appComplete, invalidApp, > oneContainerDown, running, starting > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13444) LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side
[ https://issues.apache.org/jira/browse/HIVE-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13444: Summary: LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side (was: LLAP: add HMAC signatures to LLAPIF splits) > LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side > > > Key: HIVE-13444 > URL: https://issues.apache.org/jira/browse/HIVE-13444 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-13444) LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side
[ https://issues.apache.org/jira/browse/HIVE-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-13444: --- Assignee: Sergey Shelukhin > LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side > > > Key: HIVE-13444 > URL: https://issues.apache.org/jira/browse/HIVE-13444 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13444) LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side
[ https://issues.apache.org/jira/browse/HIVE-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229339#comment-15229339 ] Sergey Shelukhin commented on HIVE-13444: - I have a partial patch for this; it's mainly blocked on the actual API stuff from the other subtasks. > LLAP: add HMAC signatures to LLAPIF splits; verify them on LLAP side > > > Key: HIVE-13444 > URL: https://issues.apache.org/jira/browse/HIVE-13444 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13443) LLAP: change JDBC LLAPIF splits to contain protobuf fragment specs (for two stages of submit)
[ https://issues.apache.org/jira/browse/HIVE-13443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13443: Summary: LLAP: change JDBC LLAPIF splits to contain protobuf fragment specs (for two stages of submit) (was: LLAP: change JDBC LLAPIF splits to contain signed protobuf ) > LLAP: change JDBC LLAPIF splits to contain protobuf fragment specs (for two > stages of submit) > -- > > Key: HIVE-13443 > URL: https://issues.apache.org/jira/browse/HIVE-13443 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC
[ https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229332#comment-15229332 ] Prasanth Jayachandran commented on HIVE-9660: - Posted some comments in RB. I will have to do another pass to better understand things in clear mind :). > store end offset of compressed data for RG in RowIndex in ORC > - > > Key: HIVE-9660 > URL: https://issues.apache.org/jira/browse/HIVE-9660 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, > HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, HIVE-9660.patch, > HIVE-9660.patch > > > Right now the end offset is estimated, which in some cases results in tons of > extra data being read. > We can add a separate array to RowIndex (positions_v2?) that stores number of > compressed buffers for each RG, or end offset, or something, to remove this > estimation magic -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13440) remove hiveserver1 scripts under bin/ext/
[ https://issues.apache.org/jira/browse/HIVE-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229326#comment-15229326 ] Thejas M Nair commented on HIVE-13440: -- any generated files still there? > remove hiveserver1 scripts under bin/ext/ > - > > Key: HIVE-13440 > URL: https://issues.apache.org/jira/browse/HIVE-13440 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Thejas M Nair > Labels: newbie, trivial > > HIVE-6977 deleted hiveserver1, however the scripts remain under bin/ext/- > ls bin/ext/hiveserver.* > bin/ext/hiveserver.cmd bin/ext/hiveserver.sh > These should be removed as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13333) StatsOptimizer throws ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229311#comment-15229311 ] Ashutosh Chauhan commented on HIVE-13333: - +1 > StatsOptimizer throws ClassCastException > > > Key: HIVE-13333 > URL: https://issues.apache.org/jira/browse/HIVE-13333 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 2.0.0 >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > Attachments: HIVE-13333.01.patch, HIVE-13333.02.patch, > HIVE-13333.03.patch > > > mvn test -Dtest=TestCliDriver -Dtest.output.overwrite=true > -Dqfile=cbo_rp_udf_udaf.q -Dhive.compute.query.using.stats=true repros the > issue. > In StatsOptimizer with return path on, we may have aggr($f0), aggr($f1) in GBY > and then select aggr($f1), aggr($f0) in SEL. > Thus we need to use colExp to find out which position is > corresponding to which position. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13437: -- Attachment: HIVE-13437.02.patch Updated patch. Removes the port variable completely. Makes name mandatory, and adds a log line indicating the port on which the service was started. [~sershe] - please take a look again. > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13437.01.patch, HIVE-13437.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13440) remove hiveserver1 scripts under bin/ext/
[ https://issues.apache.org/jira/browse/HIVE-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229287#comment-15229287 ] Vaibhav Gumashta commented on HIVE-13440: - Will also need to remove hive/service/if/hive_service.thrift > remove hiveserver1 scripts under bin/ext/ > - > > Key: HIVE-13440 > URL: https://issues.apache.org/jira/browse/HIVE-13440 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Thejas M Nair > Labels: newbie, trivial > > HIVE-6977 deleted hiveserver1, however the scripts remain under bin/ext/- > ls bin/ext/hiveserver.* > bin/ext/hiveserver.cmd bin/ext/hiveserver.sh > These should be removed as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13333) StatsOptimizer throws ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-13333: --- Description: mvn test -Dtest=TestCliDriver -Dtest.output.overwrite=true -Dqfile=cbo_rp_udf_udaf.q -Dhive.compute.query.using.stats=true repros the issue. In StatsOptimizer with return path on, we may have aggr($f0), aggr($f1) in GBY and then select aggr($f1), aggr($f0) in SEL. Thus we need to use colExp to find out which position is corresponding to which position. was:mvn test -Dtest=TestCliDriver -Dtest.output.overwrite=true -Dqfile=cbo_rp_udf_udaf.q -Dhive.compute.query.using.stats=true repros the issue. > StatsOptimizer throws ClassCastException > > > Key: HIVE-13333 > URL: https://issues.apache.org/jira/browse/HIVE-13333 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 2.0.0 >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > Attachments: HIVE-13333.01.patch, HIVE-13333.02.patch, > HIVE-13333.03.patch > > > mvn test -Dtest=TestCliDriver -Dtest.output.overwrite=true > -Dqfile=cbo_rp_udf_udaf.q -Dhive.compute.query.using.stats=true repros the > issue. > In StatsOptimizer with return path on, we may have aggr($f0), aggr($f1) in GBY > and then select aggr($f1), aggr($f0) in SEL. > Thus we need to use colExp to find out which position is > corresponding to which position. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12159) Create vectorized readers for the complex types
[ https://issues.apache.org/jira/browse/HIVE-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229282#comment-15229282 ] Hive QA commented on HIVE-12159: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797164/HIVE-12159.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 9955 tests executed *Failed tests:* {noformat} TestCliDriver-index_compact_2.q-vector_grouping_sets.q-join11.q-and-12-more - did not produce a TEST-*.xml file TestSSL - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_data_types org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_decimal_mapjoin org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_reduce_groupby_decimal org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1 org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2 org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_data_types org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_decimal_mapjoin org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_left_outer_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join1 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join5 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_reduce_groupby_decimal org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_data_types org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_decimal_mapjoin 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_left_outer_join org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS org.apache.hive.spark.client.TestSparkClient.testSyncRpc {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7492/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7492/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7492/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 27 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12797164 - PreCommit-HIVE-TRUNK-Build > Create vectorized readers for the complex types > --- > > Key: HIVE-12159 > URL: https://issues.apache.org/jira/browse/HIVE-12159 > Project: Hive > Issue Type: Sub-task >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HIVE-12159.patch > > > We need vectorized readers for the complex types. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13333) StatsOptimizer throws ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229272#comment-15229272 ] Ashutosh Chauhan commented on HIVE-13333: - Can you create a RB for this and also a brief description of the bug which this patch addresses > StatsOptimizer throws ClassCastException > > > Key: HIVE-13333 > URL: https://issues.apache.org/jira/browse/HIVE-13333 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 2.0.0 >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > Attachments: HIVE-13333.01.patch, HIVE-13333.02.patch, > HIVE-13333.03.patch > > > mvn test -Dtest=TestCliDriver -Dtest.output.overwrite=true > -Dqfile=cbo_rp_udf_udaf.q -Dhive.compute.query.using.stats=true repros the > issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13440) remove hiveserver1 scripts under bin/ext/
[ https://issues.apache.org/jira/browse/HIVE-13440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-13440: - Assignee: (was: Vaibhav Gumashta) > remove hiveserver1 scripts under bin/ext/ > - > > Key: HIVE-13440 > URL: https://issues.apache.org/jira/browse/HIVE-13440 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Thejas M Nair > Labels: newbie, trivial > > HIVE-6977 deleted hiveserver1, however the scripts remain under bin/ext/- > ls bin/ext/hiveserver.* > bin/ext/hiveserver.cmd bin/ext/hiveserver.sh > These should be removed as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13345) LLAP: metadata cache takes too much space, esp. with bloom filters, due to Java/protobuf overhead
[ https://issues.apache.org/jira/browse/HIVE-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229239#comment-15229239 ] Owen O'Malley commented on HIVE-13345: -- The current leaking of the OrcProto objects outside of the reader implementation is problematic and should be fixed. For fast loading, we should create a ReaderImpl constructor that takes a serialized file tail. The C++ implementation uses: // The contents of the file tail that must be serialized. message FileTail { optional PostScript postscript = 1; optional Footer footer = 2; optional uint64 fileLength = 3; optional uint64 postscriptLength = 4; } I assume you aren't proposing doing hand rolled serialization, which would be very error prone. If I'd seen flatbuffers before I started ORC, I would have been tempted to go that way. Now it would be too much pain for too little gain. > LLAP: metadata cache takes too much space, esp. with bloom filters, due to > Java/protobuf overhead > - > > Key: HIVE-13345 > URL: https://issues.apache.org/jira/browse/HIVE-13345 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin > > We cache java objects currently; these have high overhead, average stripe > metadata takes 200-500Kb on real files, and with bloom filters blowing up > more than x5 due to being stored as list of Long-s, up to 5Mb per stripe. > That is undesirable. > We should either create better objects for ORC (might be good in general) or > store serialized metadata and deserialize when needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-5571) Add support for COMPILING state to OperationState in HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-5571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-5571: --- Assignee: (was: Vaibhav Gumashta) > Add support for COMPILING state to OperationState in HiveServer2 > > > Key: HIVE-5571 > URL: https://issues.apache.org/jira/browse/HIVE-5571 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, JDBC >Reporter: Vaibhav Gumashta > > [HIVE-5441|https://issues.apache.org/jira/browse/HIVE-5441] splits query > execution into compile + run. However, compilation is synchronous whereas > execution can be both synchronous & asynchronous. We should add a new > COMPILING state and expose that to the client as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13439) JDBC: provide a way to retrieve GUID to query Yarn ATS
[ https://issues.apache.org/jira/browse/HIVE-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13439: Status: Patch Available (was: Open) > JDBC: provide a way to retrieve GUID to query Yarn ATS > -- > > Key: HIVE-13439 > URL: https://issues.apache.org/jira/browse/HIVE-13439 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 2.0.0, 1.2.1 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13439.1.patch > > > HIVE-9673 added support for passing base64 encoded operation handles to ATS. > We should add a method on the client side to retrieve that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13439) JDBC: provide a way to retrieve GUID to query Yarn ATS
[ https://issues.apache.org/jira/browse/HIVE-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-13439: Attachment: HIVE-13439.1.patch > JDBC: provide a way to retrieve GUID to query Yarn ATS > -- > > Key: HIVE-13439 > URL: https://issues.apache.org/jira/browse/HIVE-13439 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta > Attachments: HIVE-13439.1.patch > > > HIVE-9673 added support for passing base64 encoded operation handles to ATS. > We should add a method on the client side to retrieve that. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data
[ https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229133#comment-15229133 ] Siddharth Seth commented on HIVE-13391: --- [~sershe] - TaskRunnerCallable already creates a UGI (taskUgi) with the tokens which are passed in over the RPC request. This is passed in to the actual task executor and used to execute the entire task. I think it will be a lot simpler to setup this UGI instances appropriately, instead of modifying TezProcessor, LlapInputFormat, Orc* etc. The entire task runs in the context of this UGI. I don't think we need to retain the token sending behaviour. However, if we are retaining it, we should stop sending the tokens if the LLAP instances are configured to run with keytabs. > add an option to LLAP to use keytab to authenticate to read data > > > Key: HIVE-13391 > URL: https://issues.apache.org/jira/browse/HIVE-13391 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, > HIVE-13391.patch > > > This can be used for non-doAs case to allow access to clients who don't > propagate HDFS tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6535) JDBC: async wait should happen during fetch for results
[ https://issues.apache.org/jira/browse/HIVE-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-6535: --- Attachment: HIVE-6535.2.patch > JDBC: async wait should happen during fetch for results > --- > > Key: HIVE-6535 > URL: https://issues.apache.org/jira/browse/HIVE-6535 > Project: Hive > Issue Type: Improvement > Components: HiveServer2, JDBC >Affects Versions: 0.14.0, 1.2.1, 2.0.0 >Reporter: Thejas M Nair >Assignee: Vaibhav Gumashta > Attachments: HIVE-6535.1.patch, HIVE-6535.2.patch > > > The hive jdbc client waits for query completion during the execute() call. It would > be better to block in the jdbc for completion when the results are being > fetched. > This way the application using hive jdbc driver can do other tasks while > asynchronous query execution is happening, until it needs to fetch the result > set. > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
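[Editor's note] The deferred-blocking behavior HIVE-6535 asks for can be sketched in isolation: execute() submits the query asynchronously and returns at once, and the wait for completion moves into the fetch call. This is a minimal, hypothetical Java sketch, not Hive's actual JDBC implementation; all class and method names here are illustrative.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Illustrative sketch only -- not Hive's actual JDBC code. execute() kicks
// off the query asynchronously and returns immediately; the blocking wait
// happens in fetchResults(), when the application actually needs rows.
public class AsyncExecuteSketch {
    private CompletableFuture<List<String>> pending;

    // Hypothetical stand-in for submitting a statement to the server.
    public void execute(String query) {
        // In a real driver this would issue an asynchronous execute RPC.
        pending = CompletableFuture.supplyAsync(() -> List.of("row1", "row2"));
        // Returns immediately; the caller can do other work here.
    }

    // The wait for query completion happens here, at fetch time.
    public List<String> fetchResults() {
        return pending.join();
    }

    public static void main(String[] args) {
        AsyncExecuteSketch stmt = new AsyncExecuteSketch();
        stmt.execute("SELECT 1");                // does not block
        List<String> rows = stmt.fetchResults(); // blocks until results arrive
        if (rows.size() != 2) throw new AssertionError("expected 2 rows");
    }
}
```

With this shape, an application can call execute(), perform unrelated work, and only pay the wait when it calls fetchResults(), which is exactly the behavior the issue description requests.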
[jira] [Updated] (HIVE-13149) Remove some unnecessary HMS connections from HS2
[ https://issues.apache.org/jira/browse/HIVE-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13149: Attachment: HIVE-13149.7.patch > Remove some unnecessary HMS connections from HS2 > - > > Key: HIVE-13149 > URL: https://issues.apache.org/jira/browse/HIVE-13149 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13149.1.patch, HIVE-13149.2.patch, > HIVE-13149.3.patch, HIVE-13149.4.patch, HIVE-13149.5.patch, > HIVE-13149.6.patch, HIVE-13149.7.patch > > > In SessionState class, currently we will always try to get a HMS connection > in {{start(SessionState startSs, boolean isAsync, LogHelper console)}} > regardless of if the connection will be used later or not. > When SessionState is accessed by the tasks in TaskRunner.java, although most > of the tasks other than some like StatsTask, don't need to access HMS. > Currently a new HMS connection will be established for each Task thread. If > HiveServer2 is configured to run in parallel and the query involves many > tasks, then the connections are created but unused. > {noformat} > @Override > public void run() { > runner = Thread.currentThread(); > try { > OperationLog.setCurrentOperationLog(operationLog); > SessionState.start(ss); > runSequential(); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
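[Editor's note] The fix HIVE-13149 describes amounts to creating the metastore connection lazily, on first use, instead of eagerly in SessionState.start(), so task threads that never touch the metastore never open one. A minimal Java sketch of that lazy-initialization pattern follows; the class and method names are hypothetical, not Hive's actual API.

```java
// Illustrative sketch only -- hypothetical names, not Hive's SessionState.
// Starting a session records no connection; the connection is opened the
// first time a caller actually asks for it, and reused afterwards.
public class LazyMetastoreClient {
    static int connectionsOpened = 0; // instrumentation for the sketch

    static class Connection {
        Connection() { connectionsOpened++; }
    }

    private Connection conn;

    // Opens the connection only on first use; synchronized so concurrent
    // task threads sharing the session still open at most one connection.
    public synchronized Connection getConnection() {
        if (conn == null) {
            conn = new Connection();
        }
        return conn;
    }

    public static void main(String[] args) {
        LazyMetastoreClient session = new LazyMetastoreClient();
        // Constructing the "session" opens nothing...
        if (connectionsOpened != 0) throw new AssertionError();
        // ...only the first real use does, and repeated use reuses it.
        session.getConnection();
        session.getConnection();
        if (connectionsOpened != 1) throw new AssertionError();
    }
}
```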
[jira] [Commented] (HIVE-13149) Remove some unnecessary HMS connections from HS2
[ https://issues.apache.org/jira/browse/HIVE-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229070#comment-15229070 ] Aihua Xu commented on HIVE-13149: - Those tests actually are not related. Will reattach the same patch to test. > Remove some unnecessary HMS connections from HS2 > - > > Key: HIVE-13149 > URL: https://issues.apache.org/jira/browse/HIVE-13149 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13149.1.patch, HIVE-13149.2.patch, > HIVE-13149.3.patch, HIVE-13149.4.patch, HIVE-13149.5.patch, HIVE-13149.6.patch > > > In SessionState class, currently we will always try to get a HMS connection > in {{start(SessionState startSs, boolean isAsync, LogHelper console)}} > regardless of if the connection will be used later or not. > When SessionState is accessed by the tasks in TaskRunner.java, although most > of the tasks other than some like StatsTask, don't need to access HMS. > Currently a new HMS connection will be established for each Task thread. If > HiveServer2 is configured to run in parallel and the query involves many > tasks, then the connections are created but unused. > {noformat} > @Override > public void run() { > runner = Thread.currentThread(); > try { > OperationLog.setCurrentOperationLog(operationLog); > SessionState.start(ss); > runSequential(); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13149) Remove some unnecessary HMS connections from HS2
[ https://issues.apache.org/jira/browse/HIVE-13149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13149: Attachment: (was: HIVE-13149.7.patch) > Remove some unnecessary HMS connections from HS2 > - > > Key: HIVE-13149 > URL: https://issues.apache.org/jira/browse/HIVE-13149 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2 >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13149.1.patch, HIVE-13149.2.patch, > HIVE-13149.3.patch, HIVE-13149.4.patch, HIVE-13149.5.patch, HIVE-13149.6.patch > > > In SessionState class, currently we will always try to get a HMS connection > in {{start(SessionState startSs, boolean isAsync, LogHelper console)}} > regardless of if the connection will be used later or not. > When SessionState is accessed by the tasks in TaskRunner.java, although most > of the tasks other than some like StatsTask, don't need to access HMS. > Currently a new HMS connection will be established for each Task thread. If > HiveServer2 is configured to run in parallel and the query involves many > tasks, then the connections are created but unused. > {noformat} > @Override > public void run() { > runner = Thread.currentThread(); > try { > OperationLog.setCurrentOperationLog(operationLog); > SessionState.start(ss); > runSequential(); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13427) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229058#comment-15229058 ] Szehon Ho commented on HIVE-13427: -- +1 > Update committer list > - > > Key: HIVE-13427 > URL: https://issues.apache.org/jira/browse/HIVE-13427 > Project: Hive > Issue Type: Bug >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Minor > Attachments: HIVE-13427.patch > > > Please update committer list: > Name: Aihua Xu > Apache ID: aihuaxu > Organization: Cloudera > Name: Yongzhi Chen > Apache ID: ychena > Organization: Cloudera -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13427) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13427: Attachment: (was: HIVE-13427.patch) > Update committer list > - > > Key: HIVE-13427 > URL: https://issues.apache.org/jira/browse/HIVE-13427 > Project: Hive > Issue Type: Bug >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Minor > Attachments: HIVE-13427.patch > > > Please update committer list: > Name: Aihua Xu > Apache ID: aihuaxu > Organization: Cloudera > Name: Yongzhi Chen > Apache ID: ychena > Organization: Cloudera -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13427) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13427: Attachment: HIVE-13427.patch > Update committer list > - > > Key: HIVE-13427 > URL: https://issues.apache.org/jira/browse/HIVE-13427 > Project: Hive > Issue Type: Bug >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Minor > Attachments: HIVE-13427.patch > > > Please update committer list: > Name: Aihua Xu > Apache ID: aihuaxu > Organization: Cloudera > Name: Yongzhi Chen > Apache ID: ychena > Organization: Cloudera -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13427) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13427: Attachment: HIVE-13427.patch > Update committer list > - > > Key: HIVE-13427 > URL: https://issues.apache.org/jira/browse/HIVE-13427 > Project: Hive > Issue Type: Bug >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Minor > Attachments: HIVE-13427.patch > > > Please update committer list: > Name: Aihua Xu > Apache ID: aihuaxu > Organization: Cloudera > Name: Yongzhi Chen > Apache ID: ychena > Organization: Cloudera -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13438) Add a service check script for llap
[ https://issues.apache.org/jira/browse/HIVE-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229044#comment-15229044 ] Sergey Shelukhin commented on HIVE-13438: - Hmm... why cannot we make it part of LlapServiceDriver? > Add a service check script for llap > --- > > Key: HIVE-13438 > URL: https://issues.apache.org/jira/browse/HIVE-13438 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13438.1.patch > > > We want to have a test script that can be run by an installer such as ambari > that makes sure that the service is up and running. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data
[ https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229040#comment-15229040 ] Sergey Shelukhin commented on HIVE-13391: - Will fix on commit or in the next iteration, if any > add an option to LLAP to use keytab to authenticate to read data > > > Key: HIVE-13391 > URL: https://issues.apache.org/jira/browse/HIVE-13391 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, > HIVE-13391.patch > > > This can be used for non-doAs case to allow access to clients who don't > propagate HDFS tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data
[ https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229039#comment-15229039 ] Sergey Shelukhin commented on HIVE-13391: - Failures are unrelated - metastore timeouts. > add an option to LLAP to use keytab to authenticate to read data > > > Key: HIVE-13391 > URL: https://issues.apache.org/jira/browse/HIVE-13391 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13391.01.patch, HIVE-13391.02.patch, > HIVE-13391.patch > > > This can be used for non-doAs case to allow access to clients who don't > propagate HDFS tokens. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13391) add an option to LLAP to use keytab to authenticate to read data
[ https://issues.apache.org/jira/browse/HIVE-13391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15229009#comment-15229009 ] Hive QA commented on HIVE-13391: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797153/HIVE-13391.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 50 failed/errored test(s), 9915 tests executed *Failed tests:* {noformat} TestMiniTezCliDriver-vector_decimal_round.q-cbo_windowing.q-tez_schema_evolution.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testForcedLocalityPreemption org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testAddPartitions org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testFetchingPartitionsWithDifferentSchemas org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.testGetPartitionSpecs_WithAndWithoutPartitionGrouping org.apache.hadoop.hive.metastore.TestMetaStoreAuthorization.testMetaStoreAuthorization org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler 
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccessWithReadOnly org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore org.apache.hive.hcatalog.api.TestHCatClient.testBasicDDLCommands org.apache.hive.hcatalog.api.TestHCatClient.testCreateTableLike org.apache.hive.hcatalog.api.TestHCatClient.testDatabaseLocation org.apache.hive.hcatalog.api.TestHCatClient.testDropPartitionsWithPartialSpec org.apache.hive.hcatalog.api.TestHCatClient.testDropTableException org.apache.hive.hcatalog.api.TestHCatClient.testEmptyTableInstantiation 
org.apache.hive.hcatalog.api.TestHCatClient.testGetMessageBusTopicName org.apache.hive.hcatalog.api.TestHCatClient.testGetPartitionsWithPartialSpec org.apache.hive.hcatalog.api.TestHCatClient.testObjectNotFoundException org.apache.hive.hcatalog.api.TestHCatClient.testOtherFailure org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSchema org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema org.apache.hive.hcatalog.api.TestHCatClient.testPartitionsHCatClientImpl org.apache.hive.hcatalog.api.TestHCatClient.testRenameTable org.apache.hive.hcatalog.api.TestHCatClient.testReplicationTaskIter org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure org.apache.hive.hcatalog.api.TestHCatClient.testUpdateTableSchema org.apache.hive.hcatalog.api.repl.commands.TestCommands.org.apache.hive.hcatalog.api.repl.commands.TestCommands org.apache.hive.spark.client.Test
[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228974#comment-15228974 ] Sergey Shelukhin commented on HIVE-13437: - Sure, that makes sense +1 > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13437.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13394) Analyze table fails in tez on empty partitions/files/tables
[ https://issues.apache.org/jira/browse/HIVE-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-13394: -- Summary: Analyze table fails in tez on empty partitions/files/tables (was: Analyze table fails in tez on empty partitions) > Analyze table fails in tez on empty partitions/files/tables > --- > > Key: HIVE-13394 > URL: https://issues.apache.org/jira/browse/HIVE-13394 > Project: Hive > Issue Type: Bug > Components: Tez >Affects Versions: 1.2.1, 2.0.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Fix For: 1.2.2, 2.1.0, 2.0.1 > > Attachments: HIVE-13394.1.patch, HIVE-13394.2.patch > > > {code} > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:352) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:237) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:252) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150) > ... 14 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.process(GroupByOperator.java:766) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:343) > ... 
17 more > Caused by: java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.hadoop.hive.ql.udf.generic.NumDistinctValueEstimator.deserialize(NumDistinctValueEstimator.java:219) > at > org.apache.hadoop.hive.ql.udf.generic.NumDistinctValueEstimator.(NumDistinctValueEstimator.java:112) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDAFComputeStats$GenericUDAFNumericStatsEvaluator.merge(GenericUDAFComputeStats.java:556) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:188) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.updateAggregations(GroupByOperator.java:612) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:851) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:695) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.process(GroupByOperator.java:761) > ... 18 more > ]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 > killedTasks:0, Vertex vertex_145591034_27748_1_01 [Reducer 2] > killed/failed due to:OWN_TASK_FAILURE]DAG did not succeed due to > VERTEX_FAILURE. failedVertices:1 killedVertices:0 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228960#comment-15228960 ] Siddharth Seth commented on HIVE-13437: --- The port field is the initial configuration. I can rename that accordingly if you think this will be confusing. > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13437.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13420) Clarify HS2 WebUI Query 'Elapsed TIme'
[ https://issues.apache.org/jira/browse/HIVE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228956#comment-15228956 ] Mohit Sabharwal commented on HIVE-13420: Thanks for the changes! +1 (non-binding) > Clarify HS2 WebUI Query 'Elapsed TIme' > -- > > Key: HIVE-13420 > URL: https://issues.apache.org/jira/browse/HIVE-13420 > Project: Hive > Issue Type: Improvement > Components: Diagnosability >Affects Versions: 2.0.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: Elapsed Time.png, HIVE-13420.2.patch, HIVE-13420.patch, > Patched UI.2.png, Patched UI.png > > > Today the "Queries" section of the WebUI shows SQLOperations that are not > closed. > Elapsed time is thus a bit confusing, people might take this to mean query > runtime, actually it is the time since the operation was opened. The query > may be finished, but operation is not closed. Perhaps another timer column > is needed showing the runtime of the query to reduce this confusion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13320) Apply HIVE-11544 to explicit conversions as well as implicit ones
[ https://issues.apache.org/jira/browse/HIVE-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-13320: --- Assignee: Nita Dembla (was: Gopal V) > Apply HIVE-11544 to explicit conversions as well as implicit ones > - > > Key: HIVE-13320 > URL: https://issues.apache.org/jira/browse/HIVE-13320 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 1.3.0, 1.2.1, 2.0.0, 2.1.0 >Reporter: Gopal V >Assignee: Nita Dembla > Attachments: HIVE-13320.1.patch > > > Parsing 1 million blank values through cast(x as int) is 3x slower than > parsing a valid single digit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13420) Clarify HS2 WebUI Query 'Elapsed TIme'
[ https://issues.apache.org/jira/browse/HIVE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-13420: - Attachment: (was: HIVE-13420.2.patch) > Clarify HS2 WebUI Query 'Elapsed TIme' > -- > > Key: HIVE-13420 > URL: https://issues.apache.org/jira/browse/HIVE-13420 > Project: Hive > Issue Type: Improvement > Components: Diagnosability >Affects Versions: 2.0.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: Elapsed Time.png, HIVE-13420.2.patch, HIVE-13420.patch, > Patched UI.2.png, Patched UI.png > > > Today the "Queries" section of the WebUI shows SQLOperations that are not > closed. > Elapsed time is thus a bit confusing, people might take this to mean query > runtime, actually it is the time since the operation was opened. The query > may be finished, but operation is not closed. Perhaps another timer column > is needed showing the runtime of the query to reduce this confusion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13420) Clarify HS2 WebUI Query 'Elapsed TIme'
[ https://issues.apache.org/jira/browse/HIVE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-13420: - Attachment: HIVE-13420.2.patch > Clarify HS2 WebUI Query 'Elapsed TIme' > -- > > Key: HIVE-13420 > URL: https://issues.apache.org/jira/browse/HIVE-13420 > Project: Hive > Issue Type: Improvement > Components: Diagnosability >Affects Versions: 2.0.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: Elapsed Time.png, HIVE-13420.2.patch, HIVE-13420.patch, > Patched UI.2.png, Patched UI.png > > > Today the "Queries" section of the WebUI shows SQLOperations that are not > closed. > Elapsed time is thus a bit confusing, people might take this to mean query > runtime, actually it is the time since the operation was opened. The query > may be finished, but operation is not closed. Perhaps another timer column > is needed showing the runtime of the query to reduce this confusion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13378) LLAP help formatter is too narrow
[ https://issues.apache.org/jira/browse/HIVE-13378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13378: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Committed to master. Thanks for the review! > LLAP help formatter is too narrow > - > > Key: HIVE-13378 > URL: https://issues.apache.org/jira/browse/HIVE-13378 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.1.0 > > Attachments: HIVE-13378.patch > > > {noformat} > usage: llap > -a,--args java arguments to > the llap instance > -c,--cache cache size per > instance > -d,--directory Temp directory for > jars etc. > -e,--executors executor per > instance > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13420) Clarify HS2 WebUI Query 'Elapsed TIme'
[ https://issues.apache.org/jira/browse/HIVE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-13420: - Attachment: HIVE-13420.2.patch Patched UI.2.png Thanks Mohit for the suggestions, made some changes. One additional change is to remove 'Query' even from the other sub-titles to eliminate redundancy, as suggested for the 'Query Drilldown' link title. For 'Operations', I think introducing that term in this UI might lead to some confusion for the non-technical user, so I erred on the side of caution and decided not to use that word either. I think the concept of 'opened' and 'closed' queries is the only one the UI will present, and in terms of SQLOperation it is equivalent. > Clarify HS2 WebUI Query 'Elapsed TIme' > -- > > Key: HIVE-13420 > URL: https://issues.apache.org/jira/browse/HIVE-13420 > Project: Hive > Issue Type: Improvement > Components: Diagnosability >Affects Versions: 2.0.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: Elapsed Time.png, HIVE-13420.2.patch, HIVE-13420.patch, > Patched UI.2.png, Patched UI.png > > > Today the "Queries" section of the WebUI shows SQLOperations that are not > closed. > Elapsed time is thus a bit confusing, people might take this to mean query > runtime, actually it is the time since the operation was opened. The query > may be finished, but operation is not closed. Perhaps another timer column > is needed showing the runtime of the query to reduce this confusion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228915#comment-15228915 ] Sergey Shelukhin commented on HIVE-13437: - Hmm... then what is the port field for? Will this still be correct for static port? > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13437.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC
[ https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-9660: --- Attachment: HIVE-9660.05.patch Small update. > store end offset of compressed data for RG in RowIndex in ORC > - > > Key: HIVE-9660 > URL: https://issues.apache.org/jira/browse/HIVE-9660 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-9660.01.patch, HIVE-9660.02.patch, > HIVE-9660.03.patch, HIVE-9660.04.patch, HIVE-9660.05.patch, HIVE-9660.patch, > HIVE-9660.patch > > > Right now the end offset is estimated, which in some cases results in tons of > extra data being read. > We can add a separate array to RowIndex (positions_v2?) that stores number of > compressed buffers for each RG, or end offset, or something, to remove this > estimation magic -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication
[ https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228897#comment-15228897 ] Sergey Shelukhin commented on HIVE-13401: - Thanks for the backport! > Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token > authentication > > > Key: HIVE-13401 > URL: https://issues.apache.org/jira/browse/HIVE-13401 > Project: Hive > Issue Type: Bug > Components: Authentication >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Fix For: 2.1.0, 2.0.1 > > Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch > > > When HS2 is running in kerberos cluster but with other Sasl authentication > (e.g. LDAP) enabled, it fails in kerberos/delegation token authentication. It > is because the HS2 server uses the TSetIpAddressProcess when other > authentication is enabled. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13401) Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token authentication
[ https://issues.apache.org/jira/browse/HIVE-13401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228896#comment-15228896 ] Sergey Shelukhin commented on HIVE-13401: - Hmm, I made another commit of the same patch, reverting it now. > Kerberized HS2 with LDAP auth enabled fails kerberos/delegation token > authentication > > > Key: HIVE-13401 > URL: https://issues.apache.org/jira/browse/HIVE-13401 > Project: Hive > Issue Type: Bug > Components: Authentication >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Fix For: 2.1.0, 2.0.1 > > Attachments: HIVE-13401-branch2.0.1.patch, HIVE-13401.patch > > > When HS2 is running in kerberos cluster but with other Sasl authentication > (e.g. LDAP) enabled, it fails in kerberos/delegation token authentication. It > is because the HS2 server uses the TSetIpAddressProcess when other > authentication is enabled. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13438) Add a service check script for llap
[ https://issues.apache.org/jira/browse/HIVE-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-13438: -- Status: Patch Available (was: Open) > Add a service check script for llap > --- > > Key: HIVE-13438 > URL: https://issues.apache.org/jira/browse/HIVE-13438 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13438.1.patch > > > We want to have a test script that can be run by an installer such as ambari > that makes sure that the service is up and running. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13438) Add a service check script for llap
[ https://issues.apache.org/jira/browse/HIVE-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228863#comment-15228863 ] Vikram Dixit K commented on HIVE-13438: --- [~hagleitn] Could you please take a look? Thanks! > Add a service check script for llap > --- > > Key: HIVE-13438 > URL: https://issues.apache.org/jira/browse/HIVE-13438 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13438.1.patch > > > We want to have a test script that can be run by an installer such as ambari > that makes sure that the service is up and running. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13438) Add a service check script for llap
[ https://issues.apache.org/jira/browse/HIVE-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-13438: -- Attachment: HIVE-13438.1.patch > Add a service check script for llap > --- > > Key: HIVE-13438 > URL: https://issues.apache.org/jira/browse/HIVE-13438 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13438.1.patch > > > We want to have a test script that can be run by an installer such as ambari > that makes sure that the service is up and running. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228806#comment-15228806 ] Prasanth Jayachandran commented on HIVE-13437: -- +1 > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13437.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13429) Tool to remove dangling scratch dir
[ https://issues.apache.org/jira/browse/HIVE-13429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228807#comment-15228807 ] Thejas M Nair commented on HIVE-13429: -- [~daijy] Can you please add a reviewboard link or github pull request for review? > Tool to remove dangling scratch dir > --- > > Key: HIVE-13429 > URL: https://issues.apache.org/jira/browse/HIVE-13429 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Dai >Assignee: Daniel Dai > Attachments: HIVE-13429.1.patch > > > We have seen cases where a user leaves the scratch dir behind, which > eventually eats up HDFS storage. This can happen when the VM restarts, leaving > no chance for Hive to run its shutdown hook. This is applicable to both HiveCli > and HiveServer2. Here we provide an external tool to clear dead scratch dirs > as needed. > We need a way to identify which scratch dir is in use. We will rely on the HDFS > write lock for that. Here is how the HDFS write lock works: > 1. An HDFS client opens an HDFS file for write and only closes it at the time of > shutdown > 2. A cleanup process can try to open the HDFS file for write. If the client holding > this file is still running, we will get an exception. Otherwise, we know the > client is dead > 3. If the HDFS client dies without closing the HDFS file, the NN will reclaim the > lease after 10 min, i.e., the HDFS file held by the dead client is writable > again after 10 min > So here is how we remove a dangling scratch directory in Hive: > 1. HiveCli/HiveServer2 opens a lock file with a well-known name in the scratch directory and > only closes it when about to drop the scratch directory > 2. A command line tool, cleardanglingscratchdir, will check every scratch > directory and try to open the lock file for write. If it does not get an exception, > the owner is dead and we can safely remove the scratch directory > 3. 
The 10 min window means it is possible that a HiveCli/HiveServer2 is dead but > we still cannot reclaim the scratch directory for another 10 min. But this > should be tolerable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
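The liveness check described for HIVE-13429 above (try to open the owner's lock file for write; an exception means the owner is still alive) can be sketched with a local-filesystem analogue of the HDFS single-writer lease. This is purely illustrative: it uses POSIX flock(2) instead of an HDFS lease, and the names (`is_owner_alive`, `inuse.lck`) are invented here, not Hive's actual implementation.

```python
# Local-filesystem sketch of the HIVE-13429 idea: a session holds an
# exclusive lock on a file in its scratch dir; a cleanup tool decides
# the session is dead when it can acquire that lock itself.
import errno
import fcntl
import os
import tempfile

def is_owner_alive(lock_path: str) -> bool:
    """Return True if some process still holds the lock file."""
    fd = os.open(lock_path, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError as e:
        if e.errno in (errno.EACCES, errno.EAGAIN):
            return True   # lock held elsewhere: owner still running
        raise
    else:
        fcntl.flock(fd, fcntl.LOCK_UN)
        return False      # lock acquired: owner is gone, dir reclaimable
    finally:
        os.close(fd)

if __name__ == "__main__":
    scratch = tempfile.mkdtemp()
    lock_path = os.path.join(scratch, "inuse.lck")

    # The session (HiveCli/HS2 analogue) opens and holds the lock file.
    owner_fd = os.open(lock_path, os.O_RDWR | os.O_CREAT)
    fcntl.flock(owner_fd, fcntl.LOCK_EX)
    print(is_owner_alive(lock_path))   # session still holds it

    os.close(owner_fd)                 # session "dies", releasing the lock
    print(is_owner_alive(lock_path))   # now safe to remove the scratch dir
```

Unlike flock, the HDFS lease is only reclaimed by the NameNode about 10 minutes after the client dies, which is exactly the delay window Daniel's description calls tolerable.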
[jira] [Updated] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13437: -- Attachment: HIVE-13437.01.patch Trivial patch. [~sershe], [~prasanth_j] - could you please take a look. > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13437.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13437: -- Status: Patch Available (was: Open) > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13437.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13437: -- Issue Type: Bug (was: Improvement) > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Bug >Reporter: Siddharth Seth >Assignee: Siddharth Seth > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth reassigned HIVE-13437: - Assignee: Siddharth Seth > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Improvement >Reporter: Siddharth Seth >Assignee: Siddharth Seth > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13437) httpserver getPort does not return the actual port when attempting to use a dynamic port
[ https://issues.apache.org/jira/browse/HIVE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13437: -- Summary: httpserver getPort does not return the actual port when attempting to use a dynamic port (was: Allow dynamic ports for HttpServer) > httpserver getPort does not return the actual port when attempting to use a > dynamic port > > > Key: HIVE-13437 > URL: https://issues.apache.org/jira/browse/HIVE-13437 > Project: Hive > Issue Type: Improvement >Reporter: Siddharth Seth > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
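The getPort bug in HIVE-13437 above follows a well-known pattern: when a server is configured with port 0, the OS assigns an ephemeral port at bind time, so the actual port must be read back from the bound socket rather than from the configured field (which is what Siddharth's comment about the "initial configuration" field points at). A minimal sketch of the pattern with plain sockets, not Hive's HttpServer API:

```python
# Binding port 0 requests an ephemeral port from the OS; afterwards the
# configured value is stale, so query the bound socket for the real port.
import socket

configured_port = 0  # "dynamic port" request

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", configured_port))
srv.listen(1)

# The port the OS actually assigned; a getPort() that returned
# configured_port here would report 0, which is the bug described above.
actual_port = srv.getsockname()[1]
assert actual_port != 0

print(actual_port)
srv.close()
```

The fix is the same in any server framework: after start/bind, expose the connector's local port instead of echoing the configured one.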
[jira] [Commented] (HIVE-13420) Clarify HS2 WebUI Query 'Elapsed TIme'
[ https://issues.apache.org/jira/browse/HIVE-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228794#comment-15228794 ] Mohit Sabharwal commented on HIVE-13420: LGTM. Some suggestions on naming, just so it's more clear to the user. 1) "End Time" -> "Operation Close Timestamp" Also, since this is the time when op was closed, instead of saying "In Progress" as the default value, we can just say "Open" (since we don't know if the op is running) 2) "Query Runtime (s) " -> "Latency (s)" 3) "Opened (s)" -> "Operation Open For (s)" We may want to put columns 1) and 3) next to each other since they are related (both depend on client closing the session/operation). The word Query in "Query Drilldown" seems redundant - we could just say "Drilldown" > Clarify HS2 WebUI Query 'Elapsed TIme' > -- > > Key: HIVE-13420 > URL: https://issues.apache.org/jira/browse/HIVE-13420 > Project: Hive > Issue Type: Improvement > Components: Diagnosability >Affects Versions: 2.0.0 >Reporter: Szehon Ho >Assignee: Szehon Ho > Attachments: Elapsed Time.png, HIVE-13420.patch, Patched UI.png > > > Today the "Queries" section of the WebUI shows SQLOperations that are not > closed. > Elapsed time is thus a bit confusing, people might take this to mean query > runtime, actually it is the time since the operation was opened. The query > may be finished, but operation is not closed. Perhaps another timer column > is needed showing the runtime of the query to reduce this confusion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13436) Allow the package directory to be specified for the llap setup script
[ https://issues.apache.org/jira/browse/HIVE-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228791#comment-15228791 ] Sergey Shelukhin commented on HIVE-13436: - I am assuming this is implemented in the python script. +1 if so > Allow the package directory to be specified for the llap setup script > - > > Key: HIVE-13436 > URL: https://issues.apache.org/jira/browse/HIVE-13436 > Project: Hive > Issue Type: Improvement >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13436.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10249) ACID: show locks should show who the lock is waiting for
[ https://issues.apache.org/jira/browse/HIVE-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228792#comment-15228792 ] Wei Zheng commented on HIVE-10249: -- [~ekoifman] I noticed that in patch 3, dbtxnmgr_showlocks.q.out is missing the last column, Hostname. > ACID: show locks should show who the lock is waiting for > > > Key: HIVE-10249 > URL: https://issues.apache.org/jira/browse/HIVE-10249 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 1.3.0, 2.1.0 > > Attachments: HIVE-10249.2.patch, HIVE-10249.3.patch, HIVE-10249.patch > > > instead of just showing state WAITING, we should include what the lock is > waiting for. It will make diagnostics easier. > It would also be useful to add QueryPlan.getQueryId() so it's easy to see > which query the lock belongs to. > # need to store this in HIVE_LOCKS (additional field); this has a perf hit to > do another update on a failed attempt and to clear the field on a successful attempt. > (Actually on success, we update anyway). How exactly would this be > displayed? Each lock can block but we acquire all parts of an external lock at > once. Since we stop at the first one that blocked, we’d only update that one… > # This needs a matching Thrift change to pass to the client: ShowLocksResponse > # Perhaps we can start updating this info after the lock has been in the W state for some > time to reduce the perf hit. > # This is mostly useful for “Why is my query stuck” -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13436) Allow the package directory to be specified for the llap setup script
[ https://issues.apache.org/jira/browse/HIVE-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13436: -- Status: Patch Available (was: Open) > Allow the package directory to be specified for the llap setup script > - > > Key: HIVE-13436 > URL: https://issues.apache.org/jira/browse/HIVE-13436 > Project: Hive > Issue Type: Improvement >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13436.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13436) Allow the package directory to be specified for the llap setup script
[ https://issues.apache.org/jira/browse/HIVE-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13436: -- Attachment: HIVE-13436.1.patch This is mostly supported already. The patch just adds the "--output" parameter to LlapOptionsProcessor. [~sershe] - please review. > Allow the package directory to be specified for the llap setup script > - > > Key: HIVE-13436 > URL: https://issues.apache.org/jira/browse/HIVE-13436 > Project: Hive > Issue Type: Improvement >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13436.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
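The "--output" parameter the patch adds to LlapOptionsProcessor could be consumed roughly as below. This is a plain-Java sketch under assumed semantics (the flag is followed by a directory path); the real LlapOptionsProcessor option machinery is not reproduced here:

```java
// Minimal sketch: pick out a "--output <dir>" argument.
// Hypothetical stand-in for illustration, not Hive's actual parser.
public class OutputDirOption {
    static String outputDir(String[] args, String defaultDir) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("--output".equals(args[i])) {
                return args[i + 1]; // the directory following the flag
            }
        }
        return defaultDir; // fall back to the default package directory
    }
}
```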
[jira] [Commented] (HIVE-13398) LLAP: Simple /status and /peers web services
[ https://issues.apache.org/jira/browse/HIVE-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228773#comment-15228773 ] Sergey Shelukhin commented on HIVE-13398: - +1 > LLAP: Simple /status and /peers web services > > > Key: HIVE-13398 > URL: https://issues.apache.org/jira/browse/HIVE-13398 > Project: Hive > Issue Type: Improvement > Components: llap >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-13398.02.patch, HIVE-13398.1.patch > > > MiniLLAP doesn't have a UI service, so this has no easy tests. > {code} > curl localhost:15002/status > { > "status" : "STARTED", > "uptime" : 139093, > "build" : "2.1.0-SNAPSHOT from 77474581df4016e3899a986e079513087a945674 by > gopal source checksum a9caa5faad5906d5139c33619f1368bb" > } > {code} > {code} > curl localhost:15002/peers > { > "dynamic" : true, > "identity" : "718264f1-722e-40f1-8265-ac25587bf336", > "peers" : [ > { > "identity" : "940d6838-4dd7-4e85-95cc-5a6a2c537c04", > "host" : "sandbox121.hortonworks.com", > "management-port" : 15004, > "rpc-port" : 15001, > "shuffle-port" : 15551, > "resource" : { > "vcores" : 24, > "memory" : 128000 > }, > "host" : "sandbox121.hortonworks.com" > }, > ] > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13398) LLAP: Simple /status and /peers web services
[ https://issues.apache.org/jira/browse/HIVE-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13398: -- Attachment: HIVE-13398.02.patch Rebased patch. Also removed the security check on the /status page. [~sershe] - could you please take a quick look for sanity. > LLAP: Simple /status and /peers web services > > > Key: HIVE-13398 > URL: https://issues.apache.org/jira/browse/HIVE-13398 > Project: Hive > Issue Type: Improvement > Components: llap >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-13398.02.patch, HIVE-13398.1.patch > > > MiniLLAP doesn't have a UI service, so this has no easy tests. > {code} > curl localhost:15002/status > { > "status" : "STARTED", > "uptime" : 139093, > "build" : "2.1.0-SNAPSHOT from 77474581df4016e3899a986e079513087a945674 by > gopal source checksum a9caa5faad5906d5139c33619f1368bb" > } > {code} > {code} > curl localhost:15002/peers > { > "dynamic" : true, > "identity" : "718264f1-722e-40f1-8265-ac25587bf336", > "peers" : [ > { > "identity" : "940d6838-4dd7-4e85-95cc-5a6a2c537c04", > "host" : "sandbox121.hortonworks.com", > "management-port" : 15004, > "rpc-port" : 15001, > "shuffle-port" : 15551, > "resource" : { > "vcores" : 24, > "memory" : 128000 > }, > "host" : "sandbox121.hortonworks.com" > }, > ] > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13407) Add more subtlety to TezCompiler Perf Logging
[ https://issues.apache.org/jira/browse/HIVE-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228727#comment-15228727 ] Ashutosh Chauhan commented on HIVE-13407: - Actually, instead of {{PerfLogger.Optimizer}} we should use {{PerfLogger.TezCompiler}} for these logs. > Add more subtlety to TezCompiler Perf Logging > - > > Key: HIVE-13407 > URL: https://issues.apache.org/jira/browse/HIVE-13407 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13407.1.patch, HIVE-13407.2.patch > > > We can add more subtlety to perf logging information in TezCompiler -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13341) Stats state is not captured correctly: differentiate load table and create table
[ https://issues.apache.org/jira/browse/HIVE-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-13341: --- Status: Open (was: Patch Available) > Stats state is not captured correctly: differentiate load table and create > table > > > Key: HIVE-13341 > URL: https://issues.apache.org/jira/browse/HIVE-13341 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer, Statistics >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-13341.01.patch, HIVE-13341.02.patch, > HIVE-13341.03.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13341) Stats state is not captured correctly: differentiate load table and create table
[ https://issues.apache.org/jira/browse/HIVE-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-13341: --- Attachment: HIVE-13341.03.patch > Stats state is not captured correctly: differentiate load table and create > table > > > Key: HIVE-13341 > URL: https://issues.apache.org/jira/browse/HIVE-13341 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer, Statistics >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-13341.01.patch, HIVE-13341.02.patch, > HIVE-13341.03.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13341) Stats state is not captured correctly: differentiate load table and create table
[ https://issues.apache.org/jira/browse/HIVE-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-13341: --- Status: Patch Available (was: Open) > Stats state is not captured correctly: differentiate load table and create > table > > > Key: HIVE-13341 > URL: https://issues.apache.org/jira/browse/HIVE-13341 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer, Statistics >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-13341.01.patch, HIVE-13341.02.patch, > HIVE-13341.03.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13407) Add more subtlety to TezCompiler Perf Logging
[ https://issues.apache.org/jira/browse/HIVE-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228698#comment-15228698 ] Ashutosh Chauhan commented on HIVE-13407: - +1. Please paste sample output from this new logging. > Add more subtlety to TezCompiler Perf Logging > - > > Key: HIVE-13407 > URL: https://issues.apache.org/jira/browse/HIVE-13407 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13407.1.patch, HIVE-13407.2.patch > > > We can add more subtlety to perf logging information in TezCompiler -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11609) Capability to add a filter to hbase scan via composite key doesn't work
[ https://issues.apache.org/jira/browse/HIVE-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HIVE-11609: Attachment: HIVE-11609.6.patch.txt > Capability to add a filter to hbase scan via composite key doesn't work > --- > > Key: HIVE-11609 > URL: https://issues.apache.org/jira/browse/HIVE-11609 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Reporter: Swarnim Kulkarni >Assignee: Swarnim Kulkarni > Attachments: HIVE-11609.1.patch.txt, HIVE-11609.2.patch.txt, > HIVE-11609.3.patch.txt, HIVE-11609.4.patch.txt, HIVE-11609.5.patch, > HIVE-11609.6.patch.txt > > > It seems like the capability to add filter to an hbase scan which was added > as part of HIVE-6411 doesn't work. This is primarily because in the > HiveHBaseInputFormat, the filter is added in the getsplits instead of > getrecordreader. This works fine for start and stop keys but not for filter > because a filter is respected only when an actual scan is performed. This is > also related to the initial refactoring that was done as part of HIVE-3420. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11609) Capability to add a filter to hbase scan via composite key doesn't work
[ https://issues.apache.org/jira/browse/HIVE-11609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swarnim Kulkarni updated HIVE-11609: Attachment: (was: HIVE-11609.6.patch.txt) > Capability to add a filter to hbase scan via composite key doesn't work > --- > > Key: HIVE-11609 > URL: https://issues.apache.org/jira/browse/HIVE-11609 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Reporter: Swarnim Kulkarni >Assignee: Swarnim Kulkarni > Attachments: HIVE-11609.1.patch.txt, HIVE-11609.2.patch.txt, > HIVE-11609.3.patch.txt, HIVE-11609.4.patch.txt, HIVE-11609.5.patch > > > It seems like the capability to add filter to an hbase scan which was added > as part of HIVE-6411 doesn't work. This is primarily because in the > HiveHBaseInputFormat, the filter is added in the getsplits instead of > getrecordreader. This works fine for start and stop keys but not for filter > because a filter is respected only when an actual scan is performed. This is > also related to the initial refactoring that was done as part of HIVE-3420. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9660) store end offset of compressed data for RG in RowIndex in ORC
[ https://issues.apache.org/jira/browse/HIVE-9660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228695#comment-15228695 ] Hive QA commented on HIVE-9660: --- Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797148/HIVE-9660.04.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7490/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7490/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7490/ Messages: {noformat} This message was trimmed, see log for full details [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-github-source-source/shims/aggregator/src/main/resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-shims --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-shims --- [INFO] No sources to compile [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hive-shims --- [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-github-source-source/shims/aggregator/src/test/resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-shims --- [INFO] Executing tasks main: [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/shims/aggregator/target/tmp [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/shims/aggregator/target/warehouse [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/shims/aggregator/target/tmp/conf [copy] Copying 15 files to /data/hive-ptest/working/apache-github-source-source/shims/aggregator/target/tmp/conf [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-shims --- [INFO] No sources to compile [INFO] [INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-shims --- [INFO] Tests are skipped. [INFO] [INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-shims --- [INFO] Building jar: /data/hive-ptest/working/apache-github-source-source/shims/aggregator/target/hive-shims-2.1.0-SNAPSHOT.jar [INFO] [INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hive-shims --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-shims --- [INFO] Installing /data/hive-ptest/working/apache-github-source-source/shims/aggregator/target/hive-shims-2.1.0-SNAPSHOT.jar to /data/hive-ptest/working/maven/org/apache/hive/hive-shims/2.1.0-SNAPSHOT/hive-shims-2.1.0-SNAPSHOT.jar [INFO] Installing /data/hive-ptest/working/apache-github-source-source/shims/aggregator/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/hive-shims/2.1.0-SNAPSHOT/hive-shims-2.1.0-SNAPSHOT.pom [INFO] [INFO] [INFO] Building Hive Storage API 2.1.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-storage-api --- [INFO] Deleting 
/data/hive-ptest/working/apache-github-source-source/storage-api/target [INFO] Deleting /data/hive-ptest/working/apache-github-source-source/storage-api (includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ hive-storage-api --- [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-storage-api --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-storage-api --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-github-source-source/storage-api/src/main/resources [INFO] Copying 3 resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-storage-api --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-storage-api --- [INFO] Compiling 35 source files to /data/hive-ptest/working/apache-github-source-source/storage-api/target/classes [WARNING] /data/hive-ptest/working/apache-github-source-source/storage-api/src/java/org/apache/hadoop/hive/ql/exec/vector/IntervalDayTimeColumnVector.java:[29,51] sun.util.calendar.BaseCalendar is internal proprietary API and may be
[jira] [Commented] (HIVE-13407) Add more subtlety to TezCompiler Perf Logging
[ https://issues.apache.org/jira/browse/HIVE-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15228686#comment-15228686 ] Hive QA commented on HIVE-13407: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12797139/HIVE-13407.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 9945 tests executed *Failed tests:* {noformat} TestCustomAuthentication - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.lockConflictDbTable org.apache.hadoop.hive.ql.security.TestAuthorizationPreEventListener.testListener org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener 
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivileges org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testMetastoreProxyUser org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testSaslWithHiveMetaStore org.apache.hive.jdbc.TestSSL.testSSLFetchHttp org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.org.apache.hive.service.TestHS2ImpersonationWithRemoteMS org.apache.hive.spark.client.TestSparkClient.testAddJarsAndFiles org.apache.hive.spark.client.TestSparkClient.testCounters org.apache.hive.spark.client.TestSparkClient.testErrorJob org.apache.hive.spark.client.TestSparkClient.testJobSubmission org.apache.hive.spark.client.TestSparkClient.testMetricsCollection org.apache.hive.spark.client.TestSparkClient.testSimpleSparkJob org.apache.hive.spark.client.TestSparkClient.testSyncRpc {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7489/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/7489/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-7489/ Messages: {noformat} 
Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12797139 - PreCommit-HIVE-TRUNK-Build > Add more subtlety to TezCompiler Perf Logging > - > > Key: HIVE-13407 > URL: https://issues.apache.org/jira/browse/HIVE-13407 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Attachments: HIVE-13407.1.patch, HIVE-13407.2.patch > > > We can add more subtlety to perf logging information in TezCompiler -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13434) BaseSemanticAnalyzer.unescapeSQLString doesn't unescape \u0000 style character literals.
[ https://issues.apache.org/jira/browse/HIVE-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kousuke Saruta updated HIVE-13434: -- Summary: BaseSemanticAnalyzer.unescapeSQLString doesn't unescape \u0000 style character literals. (was: BaseSemanticAnalyzer.unescapeSQLString doesn't unescape \ style character literals.) > BaseSemanticAnalyzer.unescapeSQLString doesn't unescape \u0000 style > character literals. > > > Key: HIVE-13434 > URL: https://issues.apache.org/jira/browse/HIVE-13434 > Project: Hive > Issue Type: Bug > Components: Parser >Affects Versions: 2.1.0 >Reporter: Kousuke Saruta >Assignee: Kousuke Saruta > Attachments: HIVE-13434.1.patch > > > BaseSemanticAnalyzer.unescapeSQLString method may have a fault. When "\u0061" > style character literals are passed to the method, it's not unescaped > successfully. > In the Spark SQL project, we referenced the unescaping logic and noticed this > issue (SPARK-14426) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
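For context, a minimal sketch of the kind of \uXXXX decoding this ticket is about, written as a hypothetical standalone helper rather than the actual BaseSemanticAnalyzer.unescapeSQLString code:

```java
// Minimal sketch of \uXXXX unescaping, assuming well-formed four-hex-digit
// escapes. Illustrative helper only, not Hive's actual unescaping logic.
public class UnicodeUnescape {
    public static String unescape(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            // Recognize a \uXXXX sequence: backslash, 'u', four hex digits.
            if (c == '\\' && i + 5 < s.length() && s.charAt(i + 1) == 'u') {
                String hex = s.substring(i + 2, i + 6);
                out.append((char) Integer.parseInt(hex, 16));
                i += 5; // skip the consumed escape
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // "\\u0061" is the six-character literal backslash-u-0-0-6-1
        System.out.println(unescape("\\u0061")); // prints "a"
    }
}
```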
[jira] [Assigned] (HIVE-13427) Update committer list
[ https://issues.apache.org/jira/browse/HIVE-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu reassigned HIVE-13427: --- Assignee: Aihua Xu > Update committer list > - > > Key: HIVE-13427 > URL: https://issues.apache.org/jira/browse/HIVE-13427 > Project: Hive > Issue Type: Bug >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Minor > > Please update committer list: > Name: Aihua Xu > Apache ID: aihuaxu > Organization: Cloudera > Name: Yongzhi Chen > Apache ID: ychena > Organization: Cloudera -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-10280) LLAP: Handle errors while sending source state updates to the daemons
[ https://issues.apache.org/jira/browse/HIVE-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-10280: -- Resolution: Fixed Status: Resolved (was: Patch Available) Done. Thanks for catching this [~leftylev] > LLAP: Handle errors while sending source state updates to the daemons > - > > Key: HIVE-10280 > URL: https://issues.apache.org/jira/browse/HIVE-10280 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-10280.1.patch > > > Will likely be handled as marking the node as bad. May need a retry policy in > place though before marking a node bad to handle temporary network glitches. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13435) Hive fails to read timestamp stored as binary / int64 from externally generated parquet files
[ https://issues.apache.org/jira/browse/HIVE-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated HIVE-13435: Description: If timestamp in parquet file is stored as binary or int64 (ex: parquet file wasn't created using Hive), Hive fails to read such parquet files, as it expects timestamp to be only in int96. It would be nice if Hive can read such files. was: If timestamp in parquet file is stored as binary / int64 (ex: parquet file wasn't created using Hive), Hive fails to read such parquet files, as it expects timestamp to be only in int96. It would be nice if Hive can read such files. > Hive fails to read timestamp stored as binary / int64 from externally > generated parquet files > - > > Key: HIVE-13435 > URL: https://issues.apache.org/jira/browse/HIVE-13435 > Project: Hive > Issue Type: Improvement >Reporter: Arina Ielchiieva > > If timestamp in parquet file is stored as binary or int64 (ex: parquet file > wasn't created using Hive), Hive fails to read such parquet files, as it > expects timestamp to be only in int96. > It would be nice if Hive can read such files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13435) Hive fails to read timestamp stored as binary / int64 from externally generated parquet files
[ https://issues.apache.org/jira/browse/HIVE-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated HIVE-13435: Description: If timestamp in parquet file is stored as binary / int64 (ex: parquet file wasn't created using Hive), Hive fails to read such parquet files, as it expects timestamp to be only in int96. It would be nice if Hive can read such files. was: If timestamp in parquet file is stored as binary (ex: parquet file wasn't created using Hive), Hive fails to read such parquet files, as it expects timestamp to be only in int96. It would be nice if Hive can read such files. > Hive fails to read timestamp stored as binary / int64 from externally > generated parquet files > - > > Key: HIVE-13435 > URL: https://issues.apache.org/jira/browse/HIVE-13435 > Project: Hive > Issue Type: Improvement >Reporter: Arina Ielchiieva > > If timestamp in parquet file is stored as binary / int64 (ex: parquet file > wasn't created using Hive), Hive fails to read such parquet files, as it > expects timestamp to be only in int96. > It would be nice if Hive can read such files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13435) Hive fails to read timestamp stored as binary / int64 from externally generated parquet files
[ https://issues.apache.org/jira/browse/HIVE-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arina Ielchiieva updated HIVE-13435: Summary: Hive fails to read timestamp stored as binary / int64 from externally generated parquet files (was: Hive fails to read timestamp stored as binary from externally generated parquet files) > Hive fails to read timestamp stored as binary / int64 from externally > generated parquet files > - > > Key: HIVE-13435 > URL: https://issues.apache.org/jira/browse/HIVE-13435 > Project: Hive > Issue Type: Improvement >Reporter: Arina Ielchiieva > > If timestamp in parquet file is stored as binary (ex: parquet file wasn't > created using Hive), Hive fails to read such parquet files, as it expects > timestamp to be only in int96. > It would be nice if Hive can read such files. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
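For context on the encodings discussed in the updates above: with int64 (e.g. the TIMESTAMP_MILLIS logical type) the value is already an epoch timestamp, while the int96 layout Hive expects packs nanos-of-day together with a Julian day number. A rough sketch of the difference, with illustrative names (the Julian-epoch constant 2440588 is the Julian day of 1970-01-01):

```java
// Sketch of the two physical timestamp encodings. Illustrative only;
// not Hive's actual Parquet reader code.
public class ParquetTs {
    static final long JULIAN_EPOCH_DAY = 2440588; // Julian day of 1970-01-01
    static final long MILLIS_PER_DAY = 24L * 60 * 60 * 1000;
    static final long NANOS_PER_MILLI = 1_000_000L;

    // int96 path: Julian day + nanos within that day -> epoch millis
    static long int96ToEpochMillis(long julianDay, long nanosOfDay) {
        return (julianDay - JULIAN_EPOCH_DAY) * MILLIS_PER_DAY
             + nanosOfDay / NANOS_PER_MILLI;
    }

    // int64 path (e.g. TIMESTAMP_MILLIS): the value is already epoch millis
    static long int64ToEpochMillis(long value) {
        return value;
    }

    public static void main(String[] args) {
        // Julian day 2440588 at midnight is the Unix epoch itself.
        System.out.println(int96ToEpochMillis(2440588, 0)); // prints 0
    }
}
```

A reader that only implements the int96 path fails on externally generated files that chose binary or int64, which is the gap this ticket asks to close.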