[jira] [Created] (HIVE-26909) Backport of HIVE-20715: Disable test: udaf_histogram_numeric
Aman Raj created HIVE-26909:
-------------------------------

     Summary: Backport of HIVE-20715: Disable test: udaf_histogram_numeric
         Key: HIVE-26909
         URL: https://issues.apache.org/jira/browse/HIVE-26909
     Project: Hive
  Issue Type: Sub-task
    Reporter: Aman Raj
    Assignee: Aman Raj


--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (HIVE-26908) Disable Initiator on HMS instance at the same time enable Cleaner thread
Taraka Rama Rao Lethavadla created HIVE-26908:
---------------------------------------------

     Summary: Disable Initiator on HMS instance at the same time enable Cleaner thread
         Key: HIVE-26908
         URL: https://issues.apache.org/jira/browse/HIVE-26908
     Project: Hive
  Issue Type: New Feature
  Components: Standalone Metastore
    Reporter: Taraka Rama Rao Lethavadla
    Assignee: Taraka Rama Rao Lethavadla

In the current implementation, both the Initiator and the Cleaner are enabled or disabled together through the same config:
{noformat}
hive.compactor.initiator.on{noformat}
So there is no way to selectively disable the Initiator while keeping the Cleaner enabled, or vice versa. This proposes introducing a separate config, such as hive.compactor.cleaner.on, to control the Cleaner thread on its own.
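The proposed split could look like the following hive-site.xml fragment. Note this is only a sketch: hive.compactor.initiator.on is an existing Hive setting, while hive.compactor.cleaner.on is the new config this ticket proposes and does not exist yet.

```xml
<!-- Existing switch: today this enables/disables Initiator and Cleaner together. -->
<property>
  <name>hive.compactor.initiator.on</name>
  <value>false</value>
</property>

<!-- Proposed switch (this ticket, not yet in Hive): control the Cleaner alone,
     e.g. keep the Cleaner running while the Initiator is disabled on this HMS. -->
<property>
  <name>hive.compactor.cleaner.on</name>
  <value>true</value>
</property>
```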
[jira] [Created] (HIVE-26907) Backport of HIVE-20741: Disable udaf_context_ngrams.q and udaf_corr.q tests
Aman Raj created HIVE-26907:
-------------------------------

     Summary: Backport of HIVE-20741: Disable udaf_context_ngrams.q and udaf_corr.q tests
         Key: HIVE-26907
         URL: https://issues.apache.org/jira/browse/HIVE-26907
     Project: Hive
  Issue Type: Sub-task
    Reporter: Aman Raj
    Assignee: Aman Raj
[jira] [Created] (HIVE-26906) Backport of HIVE-19313 to branch-3 : TestJdbcWithDBTokenStoreNoDoAs tests are failing
Aman Raj created HIVE-26906:
-------------------------------

     Summary: Backport of HIVE-19313 to branch-3 : TestJdbcWithDBTokenStoreNoDoAs tests are failing
         Key: HIVE-26906
         URL: https://issues.apache.org/jira/browse/HIVE-26906
     Project: Hive
  Issue Type: Sub-task
    Reporter: Aman Raj
    Assignee: Aman Raj

org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs fails with:
{noformat}
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:42959/default;auth=delegationToken: Peer indicated failure: DIGEST-MD5: IO error acquiring password

Stacktrace:
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:42959/default;auth=delegationToken: Peer indicated failure: DIGEST-MD5: IO error acquiring password
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:269)
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:270)
	at org.apache.hive.minikdc.TestJdbcWithMiniKdc.testTokenAuth(TestJdbcWithMiniKdc.java:172)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
{noformat}
[jira] [Created] (HIVE-26905) Backport HIVE-25173 to 3.2.0: Exclude pentaho-aggdesigner-algorithm from upgrade-acid build.
Chris Nauroth created HIVE-26905:
------------------------------------

     Summary: Backport HIVE-25173 to 3.2.0: Exclude pentaho-aggdesigner-algorithm from upgrade-acid build.
         Key: HIVE-26905
         URL: https://issues.apache.org/jira/browse/HIVE-26905
     Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
    Reporter: Chris Nauroth
    Assignee: Chris Nauroth

In the current branch-3, upgrade-acid has a dependency on an old hive-exec version that has a transitive dependency on org.pentaho:pentaho-aggdesigner-algorithm. This artifact is no longer available in commonly supported Maven repositories, which causes a build failure. We can safely exclude the dependency, as was originally done in HIVE-25173.
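The exclusion would be a standard Maven dependency exclusion in upgrade-acid's pom.xml. A hypothetical sketch follows; the exact hive-exec coordinates and version property in the real pom may differ, so treat this as illustrative only:

```xml
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>${hive.version.for.upgrade.acid}</version>
  <exclusions>
    <!-- No longer resolvable from commonly supported Maven repositories;
         excluding it avoids the build failure (see HIVE-25173). -->
    <exclusion>
      <groupId>org.pentaho</groupId>
      <artifactId>pentaho-aggdesigner-algorithm</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```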
[jira] [Created] (HIVE-26904) QueryCompactor failed in commitCompaction if the tmp table dir is already removed
Quanlong Huang created HIVE-26904:
-------------------------------------

     Summary: QueryCompactor failed in commitCompaction if the tmp table dir is already removed
         Key: HIVE-26904
         URL: https://issues.apache.org/jira/browse/HIVE-26904
     Project: Hive
  Issue Type: Bug
    Reporter: Quanlong Huang
    Assignee: Quanlong Huang

commitCompaction() of query-based compactions just removes the dirs of the tmp tables. It should not fail the compaction if the dirs are already removed. We've seen such a failure in Impala's tests (IMPALA-11756):
{noformat}
2023-01-02T02:09:26,306 INFO [HiveServer2-Background-Pool: Thread-695] ql.Driver: Executing command(queryId=jenkins_20230102020926_69112755-b783-4214-89e5-1c7111dfe15f): alter table partial_catalog_info_test.insert_only_partitioned partition (part=1) compact 'minor' and wait
2023-01-02T02:09:26,306 INFO [HiveServer2-Background-Pool: Thread-695] ql.Driver: Starting task [Stage-0:DDL] in serial mode
2023-01-02T02:09:26,317 INFO [HiveServer2-Background-Pool: Thread-695] exec.Task: Compaction enqueued with id 15
...
2023-01-02T02:12:55,849 ERROR [impala-ec2-centos79-m6i-4xlarge-ondemand-1428.vpc.cloudera.com-48_executor] compactor.Worker: Caught exception while trying to compact id:15,dbname:partial_catalog_info_test,tableName:insert_only_partitioned,partName:part=1,state:^@,type:MINOR,enqueueTime:0,start:0,properties:null,runAs:jenkins,tooManyAborts:false,hasOldAbort:false,highestWriteId:3,errorMessage:null,workerId: null,initiatorId: null,retryRetention0. Marking failed to avoid repeated failures
java.io.FileNotFoundException: File hdfs://localhost:20500/tmp/hive/jenkins/092b533a-81c8-4b95-88e4-9472cf6f365d/_tmp_space.db/62ec04fb-e2d2-4a99-a454-ae709a3cccfe does not exist.
	at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:1275) ~[hadoop-hdfs-client-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:1249) ~[hadoop-hdfs-client-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1194) ~[hadoop-hdfs-client-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1190) ~[hadoop-hdfs-client-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:1208) ~[hadoop-hdfs-client-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2144) ~[hadoop-common-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.fs.FileSystem$5.<init>(FileSystem.java:2302) ~[hadoop-common-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.fs.FileSystem.listFiles(FileSystem.java:2299) ~[hadoop-common-3.1.1.7.2.15.4-6.jar:?]
	at org.apache.hadoop.hive.ql.txn.compactor.QueryCompactor$Util.cleanupEmptyDir(QueryCompactor.java:261) ~[hive-exec-3.1.3000.2022.0.13.0-60.jar:3.1.3000.2022.0.13.0-60]
	at org.apache.hadoop.hive.ql.txn.compactor.MmMinorQueryCompactor.commitCompaction(MmMinorQueryCompactor.java:72) ~[hive-exec-3.1.3000.2022.0.13.0-60.jar:3.1.3000.2022.0.13.0-60]
	at org.apache.hadoop.hive.ql.txn.compactor.QueryCompactor.runCompactionQueries(QueryCompactor.java:146) ~[hive-exec-3.1.3000.2022.0.13.0-60.jar:3.1.3000.2022.0.13.0-60]
	at org.apache.hadoop.hive.ql.txn.compactor.MmMinorQueryCompactor.runCompaction(MmMinorQueryCompactor.java:63) ~[hive-exec-3.1.3000.2022.0.13.0-60.jar:3.1.3000.2022.0.13.0-60]
	at org.apache.hadoop.hive.ql.txn.compactor.Worker.findNextCompactionAndExecute(Worker.java:435) ~[hive-exec-3.1.3000.2022.0.13.0-60.jar:3.1.3000.2022.0.13.0-60]
	at org.apache.hadoop.hive.ql.txn.compactor.Worker.lambda$run$0(Worker.java:115) ~[hive-exec-3.1.3000.2022.0.13.0-60.jar:3.1.3000.2022.0.13.0-60]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_261]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_261]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_261]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_261]
2023-01-02T02:12:55,858 INFO [impala-ec2-centos79-m6i-4xlarge-ondemand-1428.vpc.cloudera.com-48_executor] compactor.Worker: Deleting result directories created by the compactor:
2023-01-02T02:12:55,858 INFO [impala-ec2-centos79-m6i-4xlarge-ondemand-1428.vpc.cloudera.com-48_executor] compactor.Worker: hdfs://localhost:20500/test-warehouse/managed/partial_catalog_info_test.db/insert_only_partitioned/part=1/delta_001_003_v0001827
2023-01-02T02:12:55,859 INFO
{noformat}
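The tolerant-cleanup behavior this ticket asks for can be sketched in plain Java. This is only an illustration against the local filesystem rather than HDFS, and cleanupTmpDir is a hypothetical stand-in for QueryCompactor.Util.cleanupEmptyDir, not the actual Hive method:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class TolerantCleanup {

    // Recursively delete a tmp-table directory, but treat an
    // already-missing directory as success instead of failing,
    // which is the behavior commitCompaction() should have.
    static void cleanupTmpDir(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // already removed, e.g. by session tmp-space cleanup
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            // Delete children before parents.
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.deleteIfExists(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        } catch (NoSuchFileException e) {
            // Directory vanished between the exists() check and the walk;
            // the desired end state (dir gone) is already reached.
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("_tmp_space_test");
        Files.createFile(dir.resolve("data"));
        cleanupTmpDir(dir); // normal case: removes the directory
        cleanupTmpDir(dir); // dir already gone: no FileNotFoundException
        System.out.println(Files.exists(dir)); // prints "false"
    }
}
```

The key point is the same as in the stack trace above: listing or walking a directory that no longer exists must be caught (or pre-checked) rather than allowed to fail the compaction.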