[jira] [Created] (HIVE-27635) Refactor GenReduceSinkPlan out of SemanticAnalyzer
Steve Carlin created HIVE-27635:
-----------------------------------

             Summary: Refactor GenReduceSinkPlan out of SemanticAnalyzer
                 Key: HIVE-27635
                 URL: https://issues.apache.org/jira/browse/HIVE-27635
             Project: Hive
          Issue Type: Sub-task
          Components: HiveServer2
            Reporter: Steve Carlin

The goal of this task is to move genReduceSinkPlan() out of SemanticAnalyzer.

The new class should be immutable and should not mutate any objects within SemanticAnalyzer.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Assigned] (HIVE-27635) Refactor GenReduceSinkPlan out of SemanticAnalyzer
[ https://issues.apache.org/jira/browse/HIVE-27635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Carlin reassigned HIVE-27635:
-----------------------------------

    Assignee: Steve Carlin

> Refactor GenReduceSinkPlan out of SemanticAnalyzer
> --------------------------------------------------
>
>                 Key: HIVE-27635
>                 URL: https://issues.apache.org/jira/browse/HIVE-27635
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2
>            Reporter: Steve Carlin
>            Assignee: Steve Carlin
>            Priority: Major
>
> The goal of this task is to move genReduceSinkPlan() out of SemanticAnalyzer.
> The new class should be immutable and should not mutate any objects within SemanticAnalyzer.
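The kind of extraction described in HIVE-27635 can be sketched as below. This is an illustrative pattern only, not Hive's actual code: the class and method shapes (ReduceSinkPlanner, the String-based stand-in for an operator tree) are hypothetical, chosen to show what "immutable and does not mutate SemanticAnalyzer state" means in practice.

```java
// Hypothetical sketch of pulling planning logic into an immutable helper.
// Names here are illustrative, not Hive's real API.
public final class ReduceSinkPlanner {
    // All inputs are captured at construction and never change afterwards.
    private final int numReducers;
    private final boolean autoParallelism;

    public ReduceSinkPlanner(int numReducers, boolean autoParallelism) {
        this.numReducers = numReducers;
        this.autoParallelism = autoParallelism;
    }

    // A pure function: derives a result from its inputs without mutating
    // any state owned by the caller (the stand-in for SemanticAnalyzer).
    public String genReduceSinkPlan(String inputOperator) {
        return "RS(" + inputOperator + ", reducers=" + numReducers
                + ", autoParallelism=" + autoParallelism + ")";
    }
}
```

Because the helper holds only final fields and returns a new value instead of writing back into the analyzer, it can be unit-tested in isolation, which is the usual payoff of this refactoring style.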
[jira] [Assigned] (HIVE-27634) Refactor SemanticAnalyzer
[ https://issues.apache.org/jira/browse/HIVE-27634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Carlin reassigned HIVE-27634:
-----------------------------------

    Assignee: Steve Carlin

> Refactor SemanticAnalyzer
> -------------------------
>
>                 Key: HIVE-27634
>                 URL: https://issues.apache.org/jira/browse/HIVE-27634
>             Project: Hive
>          Issue Type: Improvement
>          Components: HiveServer2
>            Reporter: Steve Carlin
>            Assignee: Steve Carlin
>            Priority: Major
>
> The BaseSemanticAnalyzer/SemanticAnalyzer/CalcitePlanner hierarchy is 25,000 lines; SemanticAnalyzer by itself is over 15,000 lines long.
>
> That's a wee bit too large. Let's do better.
[jira] [Created] (HIVE-27634) Refactor SemanticAnalyzer
Steve Carlin created HIVE-27634:
-----------------------------------

             Summary: Refactor SemanticAnalyzer
                 Key: HIVE-27634
                 URL: https://issues.apache.org/jira/browse/HIVE-27634
             Project: Hive
          Issue Type: Improvement
          Components: HiveServer2
            Reporter: Steve Carlin

The BaseSemanticAnalyzer/SemanticAnalyzer/CalcitePlanner hierarchy is 25,000 lines; SemanticAnalyzer by itself is over 15,000 lines long.

That's a wee bit too large. Let's do better.
[jira] [Resolved] (HIVE-27602) Backport of HIVE-21915: Hive with TEZ UNION ALL and UDTF results in data loss
[ https://issues.apache.org/jira/browse/HIVE-27602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sankar Hariappan resolved HIVE-27602.
-------------------------------------
    Fix Version/s: 3.2.0
       Resolution: Fixed

> Backport of HIVE-21915: Hive with TEZ UNION ALL and UDTF results in data loss
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-27602
>                 URL: https://issues.apache.org/jira/browse/HIVE-27602
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Aman Raj
>            Assignee: Aman Raj
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.2.0
[jira] [Updated] (HIVE-22961) Drop function in Hive should not send request for drop database to Ranger plugin.
[ https://issues.apache.org/jira/browse/HIVE-22961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HIVE-22961:
----------------------------------
    Labels: pull-request-available  (was: )

> Drop function in Hive should not send request for drop database to Ranger plugin.
> ---------------------------------------------------------------------------------
>
>                 Key: HIVE-22961
>                 URL: https://issues.apache.org/jira/browse/HIVE-22961
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>    Affects Versions: 4.0.0
>            Reporter: Sam An
>            Assignee: Riju Trivedi
>            Priority: Major
>              Labels: pull-request-available
>
> The issue is how Hive sends the HivePrivilegeObjects to Ranger when DROP FUNCTION is run; this differs from how DROP TABLE is handled.
> For DROP TABLE the request is:
> {code:java}
> 'checkPrivileges':{'hiveOpType':DROPTABLE,
>   'inputHObjs':['HivePrivilegeObject':{'type':TABLE_OR_VIEW, 'dbName':testdemo, 'objectType':TABLE_OR_VIEW, 'objectName':t1, 'columns':[], 'partKeys':[], 'commandParams':[], 'actionType':OTHER, 'owner':systest}],
>   'outputHObjs':['HivePrivilegeObject':{'type':TABLE_OR_VIEW, 'dbName':testdemo, 'objectType':TABLE_OR_VIEW, 'objectName':t1, 'columns':[], 'partKeys':[], 'commandParams':[], 'actionType':OTHER, 'owner':systest}],
>   'context':{'clientType':HIVESERVER2, 'commandString':drop table t1, 'ipAddress':10.65.42.125, 'forwardedAddresses':null, 'sessionString':58f89a16-2df5-4124-af0e-913aabbefe06},
>   'user':systest, 'groups':[systest, wheel]}
> {code}
> Whereas for DROP FUNCTION:
> {code:java}
> {'hiveOpType':DROPFUNCTION,
>   'inputHObjs':['HivePrivilegeObject':{'type':FUNCTION, 'dbName':udfdemo, 'objectType':FUNCTION, 'objectName':aes1, 'columns':[], 'partKeys':[], 'commandParams':[], 'actionType':OTHER, 'owner':null}],
>   'outputHObjs':['HivePrivilegeObject':{'type':DATABASE, 'dbName':udfdemo, 'objectType':DATABASE, 'objectName':null, 'columns':[], 'partKeys':[], 'commandParams':[], 'actionType':OTHER, 'owner':systest},'HivePrivilegeObject':{'type':FUNCTION, 'dbName':udfdemo, 'objectType':FUNCTION, 'objectName':aes1, 'columns':[], 'partKeys':[], 'commandParams':[], 'actionType':OTHER, 'owner':null}],
>   'context':{'clientType':HIVESERVER2, 'commandString':drop function udfdemo.aes1, 'ipAddress':10.65.42.125, 'forwardedAddresses':null, 'sessionString':442ca4d3-f34a-470c-878a-18542b99016c},
>   'user':systest, 'groups':[systest, wheel]}
> {code}
> For DROP FUNCTION, outputHObjs contains an additional DATABASE object that should not be there, and this causes the Ranger request to be generated differently.
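A fix along the lines the ticket describes would keep the DATABASE entry out of outputHObjs for DROP FUNCTION, so the authorizer sees the same shape as for DROP TABLE. The sketch below is illustrative only: ObjType and PrivObject are simplified stand-ins for Hive's HivePrivilegeObject, not the real authorization API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: simplified stand-ins for Hive's authorization types.
public class DropFunctionOutputs {
    enum ObjType { DATABASE, FUNCTION }

    static class PrivObject {
        final ObjType type;
        final String dbName;
        final String objectName;
        PrivObject(ObjType type, String dbName, String objectName) {
            this.type = type;
            this.dbName = dbName;
            this.objectName = objectName;
        }
    }

    // For DROP FUNCTION, only the function itself belongs in the outputs
    // sent to the authorizer (e.g. the Ranger plugin).
    static List<PrivObject> outputsForDropFunction(String db, String fn) {
        List<PrivObject> outputs = new ArrayList<>();
        outputs.add(new PrivObject(ObjType.FUNCTION, db, fn));
        // Deliberately no DATABASE entry here: adding one is the bug
        // the ticket describes.
        return outputs;
    }
}
```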
[jira] [Resolved] (HIVE-27241) insert queries failing for iceberg table stored with orc using zstd compression codec.
[ https://issues.apache.org/jira/browse/HIVE-27241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zoltán Rátkai resolved HIVE-27241.
----------------------------------
    Release Note: This issue does not exist in Hive upstream.
      Resolution: Not A Problem

> insert queries failing for iceberg table stored with orc using zstd compression codec.
> --------------------------------------------------------------------------------------
>
>                 Key: HIVE-27241
>                 URL: https://issues.apache.org/jira/browse/HIVE-27241
>             Project: Hive
>          Issue Type: Bug
>          Components: Iceberg integration
>            Reporter: Dharmik Thakkar
>            Priority: Major
>
> Insert queries fail for an Iceberg table stored as ORC with the zstd compression codec:
> {code:java}
> create table test_dt (id int) stored by iceberg stored as orc tblproperties('write.orc.compression-codec'='zstd');
> insert into test_dt values (1);
> {code}
> {code:java}
> Error while compiling statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1681195782720_0001_2_00, diagnostics=[Task failed, taskId=task_1681195782720_0001_2_00_00, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1681195782720_0001_2_00_00_0:java.lang.RuntimeException: java.lang.RuntimeException: java.lang.NoClassDefFoundError: io/airlift/compress/zstd/ZstdCompressor
>     at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351)
>     at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)
>     at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>     at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84)
>     at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70)
>     at java.base/java.security.AccessController.doPrivileged(Native Method)
>     at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>     at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70)
>     at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40)
>     at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>     at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.lang.RuntimeException: java.lang.NoClassDefFoundError: io/airlift/compress/zstd/ZstdCompressor
>     at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101)
>     at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>     at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:437)
>     at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:297)
>     ... 15 more
> Caused by: java.lang.NoClassDefFoundError: io/airlift/compress/zstd/ZstdCompressor
>     at org.apache.hive.iceberg.org.apache.orc.impl.WriterImpl.createCodec(WriterImpl.java:281)
>     at org.apache.hive.iceberg.org.apache.orc.impl.OrcCodecPool.getCodec(OrcCodecPool.java:56)
>     at org.apache.hive.iceberg.org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:116)
>     at org.apache.hive.iceberg.org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:94)
>     at org.apache.hive.iceberg.org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:220)
>     at org.apache.hive.iceberg.org.apache.orc.OrcFile.createWriter(OrcFile.java:1010)
>     at org.apache.iceberg.orc.OrcFileAppender.newOrcWriter(OrcFileAppender.java:171)
>     at org.apache.iceberg.orc.OrcFileAppender.<init>(OrcFileAppender.java:90)
>     at org.apache.iceberg.orc.ORC$WriteBuilder.build(ORC.java:210)
>     at org.apache.iceberg.orc.ORC$DataWriteBuilder.build(ORC.java:405)
>     at org.apache.iceberg.data.BaseFileWriterFactory.newDataWriter(BaseFileWriterFactory.java:136)
>     at org.apache.iceberg.io.RollingDataWriter.newWriter(RollingDataWriter.java:49)
>     at org.apache.iceberg.io.RollingDataWriter.newWriter(RollingDataWriter.java:33)
>     at org.apache.iceberg.io.RollingFileWriter.openCurrentWriter(RollingFileWriter.java:104)
>     at org.apache.iceberg.io.RollingDataWriter.<init>(RollingDataWriter.java:44)
>     at org.apache.iceberg.io.ClusteredDataWriter.newWriter(ClusteredDataWriter.java:51)
> {code}
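The NoClassDefFoundError in the trace above means the io.airlift zstd compressor classes were not visible to the JVM running the task, i.e. a classpath problem rather than a query problem. A quick generic way to check whether a class is loadable from a given environment is a reflective lookup; this is plain Java, not a Hive API:

```java
public class ClasspathCheck {
    // Returns true if the named class can be loaded by the current JVM.
    static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class the stack trace above failed to load:
        System.out.println("zstd codec visible: "
                + isOnClasspath("io.airlift.compress.zstd.ZstdCompressor"));
    }
}
```

Running this inside the same environment as the failing task (for example via a UDF or a small driver on the Tez/LLAP classpath) shows whether the compressor jar is actually being shipped.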
[jira] [Updated] (HIVE-27633) HMS: MTable to Table process reduces view related SQL
[ https://issues.apache.org/jira/browse/HIVE-27633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HIVE-27633:
----------------------------------
    Labels: pull-request-available  (was: )

> HMS: MTable to Table process reduces view related SQL
> -----------------------------------------------------
>
>                 Key: HIVE-27633
>                 URL: https://issues.apache.org/jira/browse/HIVE-27633
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>            Reporter: dzcxzl
>            Priority: Minor
>              Labels: pull-request-available
[jira] [Created] (HIVE-27633) HMS: MTable to Table process reduces view related SQL
dzcxzl created HIVE-27633:
-----------------------------

             Summary: HMS: MTable to Table process reduces view related SQL
                 Key: HIVE-27633
                 URL: https://issues.apache.org/jira/browse/HIVE-27633
             Project: Hive
          Issue Type: Improvement
          Components: Metastore
            Reporter: dzcxzl
[jira] [Updated] (HIVE-27578) Refactor genJoinRelNode to use genAllRexNode instead of genAllExprNodeDesc
[ https://issues.apache.org/jira/browse/HIVE-27578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stamatis Zampetakis updated HIVE-27578:
---------------------------------------
    Fix Version/s: 4.0.0

> Refactor genJoinRelNode to use genAllRexNode instead of genAllExprNodeDesc
> --------------------------------------------------------------------------
>
>                 Key: HIVE-27578
>                 URL: https://issues.apache.org/jira/browse/HIVE-27578
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Soumyakanti Das
>            Assignee: Soumyakanti Das
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
> Currently the {{genJoinRelNode}} method relies on {{genAllExprNodeDesc}} for adding backticks to the ON clause conditions, but we can use the {{genAllRexNode}} method instead and not rely on ExprNodes.
> There was a previous effort to get RexNodes directly from the AST, and this method call was probably overlooked. We can see that changes were made around this method call to use RexNodes instead of ExprNodes, [here|https://github.com/apache/hive/pull/970/files#diff-fc58b141b1cc612eb221bb781c83e1a5c98e054790b2803be60b4842d0e9a5d9R2753].
>
> Relevant previous Jiras:
> # HIVE-23100
> # HIVE-22746
>
> With this change, we can avoid going through {{genAllExprNodeDesc}} and avoid mixing RexNodes and ExprNodes.
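The shape of the refactoring HIVE-27578 describes is collapsing a two-step translation (AST to ExprNodeDesc to RexNode) into a direct one (AST to RexNode), so the planner works with a single expression IR. The sketch below uses toy stand-in classes; the real genAllRexNode/genAllExprNodeDesc signatures in Hive are different, and this only illustrates the idea:

```java
// Toy stand-ins for Hive's two expression IRs; illustrative only.
class Ast { final String text; Ast(String t) { text = t; } }
class RexNode { final String repr; RexNode(String r) { repr = r; } }
class ExprNodeDesc { final String repr; ExprNodeDesc(String r) { repr = r; } }

public class JoinCondTranslation {
    // Two-step path the ticket wants to remove: AST -> ExprNodeDesc -> RexNode.
    static RexNode viaExprNode(Ast cond) {
        ExprNodeDesc desc = new ExprNodeDesc(cond.text);
        return new RexNode(desc.repr);
    }

    // Direct path: AST -> RexNode, keeping one IR throughout planning.
    static RexNode direct(Ast cond) {
        return new RexNode(cond.text);
    }
}
```

Both paths must produce equivalent results for the ON-clause conditions; the direct path simply avoids constructing and then discarding the intermediate ExprNodeDesc objects.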