[jira] [Resolved] (HIVE-27670) Failed to build the image locally on Apple silicon
[ https://issues.apache.org/jira/browse/HIVE-27670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihua Deng resolved HIVE-27670. Fix Version/s: 4.0.0 Resolution: Fixed Fix has been merged. Thank you [~simhadri-g], [~zratkai] and [~ayushtkn] for the reviews! > Failed to build the image locally on Apple silicon > -- > > Key: HIVE-27670 > URL: https://issues.apache.org/jira/browse/HIVE-27670 > Project: Hive > Issue Type: Sub-task >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > When using build.sh to build the image, the arg HADOOP_VERSION is empty, and > the build fails as a consequence. > > {noformat} > => ERROR [env 5/2] RUN tar -xzvf /opt/hadoop-$HADOOP_VERSION.tar.gz -C /opt/ > && rm -rf /opt/hadoop-$HADOOP_VERSION 0.1s > -- > > [env 5/2] RUN tar -xzvf /opt/hadoop-$HADOOP_VERSION.tar.gz -C /opt/ && > rm -rf /opt/hadoop-$HADOOP_VERSION/share/doc/* && tar -xzvf > /opt/apache-hive-$HIVE_VERSION-bin.tar.gz -C /opt/ && rm -rf > /opt/apache-hive-$HIVE_VERSION-bin/jdbc/* && tar -xzvf > /opt/apache-tez-$TEZ_VERSION-bin.tar.gz -C /opt && rm -rf > /opt/apache-tez-$TEZ_VERSION-bin/share/*: > #0 0.135 tar (child): /opt/hadoop-.tar.gz: Cannot open: No such file or > directory{noformat} > This is caused by the "--build-arg"s following --build-arg > "BUILD_ENV=unarchive" not being passed to the build command; adding a '\' at the > end of the "BUILD_ENV=unarchive" line gets rid of the problem. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
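The missing line continuation can be illustrated without Docker: in a shell script, a line that does not end in '\' terminates the command, so the following `--build-arg` lines are silently dropped from the docker build invocation. A minimal Python sketch of that behavior (the Hadoop version number is made up for illustration):

```python
def shell_commands(script):
    """Split a script into commands the way a shell would:
    a trailing backslash joins a line to the next one."""
    joined = script.replace("\\\n", " ")
    return [line.split() for line in joined.splitlines() if line.strip()]

# Missing continuation: the second line is a separate (lost) command.
broken = """docker build --build-arg BUILD_ENV=unarchive
--build-arg HADOOP_VERSION=3.3.1 .
"""

# With the trailing '\', both lines form one docker invocation.
fixed = """docker build --build-arg BUILD_ENV=unarchive \\
--build-arg HADOOP_VERSION=3.3.1 .
"""

print("HADOOP_VERSION=3.3.1" in shell_commands(broken)[0])  # False
print("HADOOP_VERSION=3.3.1" in shell_commands(fixed)[0])   # True
```

Without the continuation, docker never sees HADOOP_VERSION, so `$HADOOP_VERSION` expands to the empty string inside the Dockerfile, which is exactly the `/opt/hadoop-.tar.gz` path in the error above.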
[jira] [Updated] (HIVE-27687) Logger variable should be static final as its creation takes more time in query compilation
[ https://issues.apache.org/jira/browse/HIVE-27687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27687: -- Labels: pull-request-available (was: ) > Logger variable should be static final as its creation takes more time in > query compilation > --- > > Key: HIVE-27687 > URL: https://issues.apache.org/jira/browse/HIVE-27687 > Project: Hive > Issue Type: Task > Components: Hive >Reporter: Ramesh Kumar Thangarajan >Assignee: Ramesh Kumar Thangarajan >Priority: Major > Labels: pull-request-available > Attachments: Screenshot 2023-09-12 at 5.03.31 PM.png > > > In query compilation, > LoggerFactory.getLogger() seems to take up more time. Some of the serde > classes use a non-static variable for the Logger, which forces a getLogger() call > on every instance creation. > Making the Logger variable static final will avoid this code path for every serde > object construction. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27687) Logger variable should be static final as its creation takes more time in query compilation
[ https://issues.apache.org/jira/browse/HIVE-27687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ramesh Kumar Thangarajan updated HIVE-27687: Attachment: Screenshot 2023-09-12 at 5.03.31 PM.png > Logger variable should be static final as its creation takes more time in > query compilation > --- > > Key: HIVE-27687 > URL: https://issues.apache.org/jira/browse/HIVE-27687 > Project: Hive > Issue Type: Task > Components: Hive >Reporter: Ramesh Kumar Thangarajan >Assignee: Ramesh Kumar Thangarajan >Priority: Major > Attachments: Screenshot 2023-09-12 at 5.03.31 PM.png > > > In query compilation, > LoggerFactory.getLogger() seems to take up more time. Some of the serde > classes use a non-static variable for the Logger, which forces a getLogger() call > on every instance creation. > Making the Logger variable static final will avoid this code path for every serde > object construction. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27687) Logger variable should be static final as its creation takes more time in query compilation
Ramesh Kumar Thangarajan created HIVE-27687: --- Summary: Logger variable should be static final as its creation takes more time in query compilation Key: HIVE-27687 URL: https://issues.apache.org/jira/browse/HIVE-27687 Project: Hive Issue Type: Task Components: Hive Reporter: Ramesh Kumar Thangarajan Assignee: Ramesh Kumar Thangarajan In query compilation, LoggerFactory.getLogger() seems to take up more time. Some of the serde classes use a non-static variable for the Logger, which forces a getLogger() call on every instance creation. Making the Logger variable static final will avoid this code path for every serde object construction. -- This message was sent by Atlassian Jira (v8.20.10#820010)
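The cost pattern described in HIVE-27687 can be sketched in Python, with the stdlib logging module standing in for SLF4J and invented class names: a logger fetched in the constructor triggers a lookup per object, while a class-level logger (the analogue of Java's `private static final Logger`) is looked up once at class-definition time.

```python
import logging

calls = {"count": 0}
_real_get_logger = logging.getLogger

def counting_get_logger(name=None):
    # Wrap getLogger so we can count how often it actually runs.
    calls["count"] += 1
    return _real_get_logger(name)

logging.getLogger = counting_get_logger

class PerInstanceLoggerSerDe:
    def __init__(self):
        # Anti-pattern: one logger lookup per object constructed.
        self.log = logging.getLogger("PerInstanceLoggerSerDe")

class StaticLoggerSerDe:
    # Analogue of `private static final Logger LOG = LoggerFactory.getLogger(...)`:
    # the lookup happens once, when the class is defined.
    LOG = logging.getLogger("StaticLoggerSerDe")

calls["count"] = 0
for _ in range(1000):
    PerInstanceLoggerSerDe()
print(calls["count"])  # 1000

calls["count"] = 0
for _ in range(1000):
    StaticLoggerSerDe()
print(calls["count"])  # 0
```

The per-instance variant pays the lookup on every serde construction during query compilation; the class-level variant pays it exactly once.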
[jira] [Updated] (HIVE-27675) Support keystore/truststore types for hive to zookeeper integration points
[ https://issues.apache.org/jira/browse/HIVE-27675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27675: -- Labels: pull-request-available (was: ) > Support keystore/truststore types for hive to zookeeper integration points > -- > > Key: HIVE-27675 > URL: https://issues.apache.org/jira/browse/HIVE-27675 > Project: Hive > Issue Type: Bug > Components: HiveServer2, JDBC, Standalone Metastore >Affects Versions: 3.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Labels: pull-request-available > > In HIVE-24253, we added support for HS2/HMS/JDBC Driver to support other > store types like BCFKS (other than JKS). This allows JDBC clients to connect > to HS2 directly. However, with service discovery enabled, the clients have to > connect to ZooKeeper to determine HS2 endpoints. This connectivity currently > does not support other store types. Similarly, HS2/HMS services also do not > provide the ability to use different store types for the zk registration process. > {noformat} > $ beeline > Connecting to > jdbc:hive2://:2181/default;httpPath=cliservice;principal=hive/_HOST@;retries=5;serviceDiscoveryMode=zooKeeper;ssl=true;sslTrustStore=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks;transportMode=http;trustStorePassword=RoeCFK11Pq54;trustStoreType=bcfks;zooKeeperNamespace=hiveserver2 > Error: org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read > HiveServer2 configs from ZooKeeper (state=,code=0) > {noformat} > {noformat} > Opening socket connection to server :2182. Will attempt to > SASL-authenticate using Login Context section 'HiveZooKeeperClient' > 2023-08-09 13:28:07,591 WARN io.netty.channel.ChannelInitializer: > [nioEventLoopGroup-3-1]: Failed to initialize a channel. 
Closing: [id: > 0x0937583f] > org.apache.zookeeper.common.X509Exception$SSLContextException: Failed to > create KeyManager > at > org.apache.zookeeper.common.X509Util.createSSLContextAndOptions(X509Util.java:346) > ~[zookeeper-3.5.5.7.2.16.300-7.jar:3.5.5.7.2.16.300-7] > at > org.apache.zookeeper.common.X509Util.createSSLContext(X509Util.java:278) > ~[zookeeper-3.5.5.7.2.16.300-7.jar:3.5.5.7.2.16.300-7] > at > org.apache.zookeeper.ClientCnxnSocketNetty$ZKClientPipelineFactory.initSSL(ClientCnxnSocketNetty.java:454) > ~[zookeeper-3.5.5.7.2.16.300-7.jar:3.5.5.7.2.16.300-7] > at > org.apache.zookeeper.ClientCnxnSocketNetty$ZKClientPipelineFactory.initChannel(ClientCnxnSocketNetty.java:444) > ~[zookeeper-3.5.5.7.2.16.300-7.jar:3.5.5.7.2.16.300-7] > at > org.apache.zookeeper.ClientCnxnSocketNetty$ZKClientPipelineFactory.initChannel(ClientCnxnSocketNetty.java:429) > ~[zookeeper-3.5.5.7.2.16.300-7.jar:3.5.5.7.2.16.300-7] > at > io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:1114) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > 
io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486) > [netty-transport-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174) > [netty-common-4.1.86.Final.jar:4.1.86.Final] > at > io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecu
[jira] [Commented] (HIVE-24621) TEXT and varchar datatype does not support unicode encoding in MSSQL
[ https://issues.apache.org/jira/browse/HIVE-24621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764324#comment-17764324 ] Ayush Saxena commented on HIVE-24621: - Thanks [~gupta.nikhil0007] for the confirmation. For now this is removed as a blocker for the Hive 4.x release, but the branch is open if this lands before then; we just won't be holding back the release for it > TEXT and varchar datatype does not support unicode encoding in MSSQL > > > Key: HIVE-24621 > URL: https://issues.apache.org/jira/browse/HIVE-24621 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Nikhil Gupta >Assignee: Nikhil Gupta >Priority: Major > Labels: check > > Why Unicode is required? > In the following example the Chinese characters cannot be properly interpreted. > {noformat} > CREATE VIEW `test_view` AS select `test_tbl_char`.`col1` from > `test_db5`.`test_tbl_char` where `test_tbl_char`.`col1`='你好'; > show create table test_view; > ++ > | createtab_stmt | > ++ > | CREATE VIEW `test_view` AS select `test_tbl_char`.`col1` from > `test_db5`.`test_tbl_char` where `test_tbl_char`.`col1`='??' | > ++ {noformat} > > This issue comes because TBLS is defined as follows: > > CREATE TABLE TBLS > ( > TBL_ID bigint NOT NULL, > CREATE_TIME int NOT NULL, > DB_ID bigint NULL, > LAST_ACCESS_TIME int NOT NULL, > OWNER nvarchar(767) NULL, > OWNER_TYPE nvarchar(10) NULL, > RETENTION int NOT NULL, > SD_ID bigint NULL, > TBL_NAME nvarchar(256) NULL, > TBL_TYPE nvarchar(128) NULL, > VIEW_EXPANDED_TEXT text NULL, > VIEW_ORIGINAL_TEXT text NULL, > IS_REWRITE_ENABLED bit NOT NULL DEFAULT 0, > WRITE_ID bigint NOT NULL DEFAULT 0 > ); > The text data type does not support Unicode encoding irrespective of collation, and > the varchar data type does not support Unicode encoding prior to SQL Server 2019. > Also, a UTF8-enabled collation needs to be defined to use Unicode characters. 
-- This message was sent by Atlassian Jira (v8.20.10#820010)
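The '??' corruption in the quoted HIVE-24621 example can be reproduced without SQL Server: a non-Unicode text/varchar column behaves like encoding into a single-byte codepage with replacement, while nvarchar (UTF-16 storage) or a UTF8-enabled collation round-trips the characters. A minimal Python sketch, where latin-1 stands in for a non-Unicode SQL Server codepage:

```python
text = "你好"

# What a non-Unicode text/varchar column effectively does: the single-byte
# codepage has no slot for CJK characters, so they are replaced.
lossy = text.encode("latin-1", errors="replace").decode("latin-1")
print(lossy)  # ??

# What nvarchar (UTF-16) or a UTF8-enabled collation preserves.
roundtrip = text.encode("utf-16").decode("utf-16")
print(roundtrip)  # 你好
```

This is why changing the metastore schema columns such as VIEW_ORIGINAL_TEXT away from the non-Unicode `text` type matters for non-Latin view definitions.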
[jira] [Updated] (HIVE-27672) Iceberg: Truncate partition support
[ https://issues.apache.org/jira/browse/HIVE-27672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27672: -- Labels: pull-request-available (was: ) > Iceberg: Truncate partition support > --- > > Key: HIVE-27672 > URL: https://issues.apache.org/jira/browse/HIVE-27672 > Project: Hive > Issue Type: New Feature >Reporter: Sourabh Badhya >Assignee: Sourabh Badhya >Priority: Major > Labels: pull-request-available > > Support the following truncate operations on a partition level - > {code:java} > TRUNCATE TABLE tableName PARTITION (partCol1 = partValue1, partCol2 = > partValue2);{code} > Truncate is not supported for partition transforms. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27686) Use ORC 1.8.5.
[ https://issues.apache.org/jira/browse/HIVE-27686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27686: -- Labels: pull-request-available (was: ) > Use ORC 1.8.5. > -- > > Key: HIVE-27686 > URL: https://issues.apache.org/jira/browse/HIVE-27686 > Project: Hive > Issue Type: Improvement >Reporter: Zoltán Rátkai >Assignee: Zoltán Rátkai >Priority: Major > Labels: pull-request-available > > ORC-1413 fixed a bug in the ORC row-level filter; the fix was released in ORC > 1.8.4, so use the latest release from the 1.8.x line -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27686) Use ORC 1.8.5.
Zoltán Rátkai created HIVE-27686: Summary: Use ORC 1.8.5. Key: HIVE-27686 URL: https://issues.apache.org/jira/browse/HIVE-27686 Project: Hive Issue Type: Improvement Reporter: Zoltán Rátkai Assignee: Zoltán Rátkai ORC-1413 fixed a bug in the ORC row-level filter; the fix was released in ORC 1.8.4, so use the latest release from the 1.8.x line -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Reopened] (HIVE-27685) test ticket
[ https://issues.apache.org/jira/browse/HIVE-27685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Drew Foulks reopened HIVE-27685: > test ticket > --- > > Key: HIVE-27685 > URL: https://issues.apache.org/jira/browse/HIVE-27685 > Project: Hive > Issue Type: Improvement >Reporter: Ayush Saxena >Priority: Trivial > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27685) test ticket
Ayush Saxena created HIVE-27685: --- Summary: test ticket Key: HIVE-27685 URL: https://issues.apache.org/jira/browse/HIVE-27685 Project: Hive Issue Type: Improvement Reporter: Ayush Saxena -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27685) test ticket
[ https://issues.apache.org/jira/browse/HIVE-27685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HIVE-27685: Fix Version/s: 4.0.0 > test ticket > --- > > Key: HIVE-27685 > URL: https://issues.apache.org/jira/browse/HIVE-27685 > Project: Hive > Issue Type: Improvement >Reporter: Ayush Saxena >Priority: Trivial > Fix For: 4.0.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HIVE-27685) test ticket
[ https://issues.apache.org/jira/browse/HIVE-27685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena resolved HIVE-27685. - Fix Version/s: (was: 4.0.0) Resolution: Fixed > test ticket > --- > > Key: HIVE-27685 > URL: https://issues.apache.org/jira/browse/HIVE-27685 > Project: Hive > Issue Type: Improvement >Reporter: Ayush Saxena >Priority: Trivial > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27650) Oracle init-db is flaky
[ https://issues.apache.org/jira/browse/HIVE-27650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HIVE-27650: Fix Version/s: 4.0.0 > Oracle init-db is flaky > --- > > Key: HIVE-27650 > URL: https://issues.apache.org/jira/browse/HIVE-27650 > Project: Hive > Issue Type: Bug >Reporter: Ayush Saxena >Priority: Major > Fix For: 4.0.0 > > > The Oracle docker container in the Hive precommit run fails very often, > e.g. > http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-4578/2/pipeline/462 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27683) Incorrect result when filtering the table with default partition
[ https://issues.apache.org/jira/browse/HIVE-27683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Drew Foulks updated HIVE-27683: --- Fix Version/s: (was: 2.3.9) > Incorrect result when filtering the table with default partition > > > Key: HIVE-27683 > URL: https://issues.apache.org/jira/browse/HIVE-27683 > Project: Hive > Issue Type: Bug >Reporter: Zhihua Deng >Priority: Major > > Steps to repro: > {noformat} > create database pt; > create table pt.alterdynamic_part_table(intcol string) partitioned by > (partcol1 int, partcol2 int); > insert into table pt.alterdynamic_part_table partition(partcol1, partcol2) > select '2', 2, NULL; > select intcol from pt.alterdynamic_part_table where (partcol1=2 and > partcol2=1) or (partcol1=2 and partcol2='__HIVE_DEFAULT_PARTITION__'); > select intcol from pt.alterdynamic_part_table where > partcol2='__HIVE_DEFAULT_PARTITION__';{noformat} > The last two queries should return one row instead of an empty result. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27683) Incorrect result when filtering the table with default partition
[ https://issues.apache.org/jira/browse/HIVE-27683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Drew Foulks updated HIVE-27683: --- Fix Version/s: 2.3.9 > Incorrect result when filtering the table with default partition > > > Key: HIVE-27683 > URL: https://issues.apache.org/jira/browse/HIVE-27683 > Project: Hive > Issue Type: Bug >Reporter: Zhihua Deng >Priority: Major > Fix For: 2.3.9 > > > Steps to repro: > {noformat} > create database pt; > create table pt.alterdynamic_part_table(intcol string) partitioned by > (partcol1 int, partcol2 int); > insert into table pt.alterdynamic_part_table partition(partcol1, partcol2) > select '2', 2, NULL; > select intcol from pt.alterdynamic_part_table where (partcol1=2 and > partcol2=1) or (partcol1=2 and partcol2='__HIVE_DEFAULT_PARTITION__'); > select intcol from pt.alterdynamic_part_table where > partcol2='__HIVE_DEFAULT_PARTITION__';{noformat} > The last two queries should return one row instead of an empty result. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27672) Iceberg: Truncate partition support
[ https://issues.apache.org/jira/browse/HIVE-27672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sourabh Badhya updated HIVE-27672: -- Description: Support the following truncate operations on a partition level - {code:java} TRUNCATE TABLE tableName PARTITION (partCol1 = partValue1, partCol2 = partValue2);{code} Truncate is not supported for partition transforms. was: Support the following truncate operations on a partition level - {code:java} TRUNCATE TABLE tableName PARTITION (partCol1 = partValue1, partCol2 = partValue2);{code} For partition transforms other than identity, the partition column name must have a suffix as follows - 1. Truncate transform on 'b' column - b_trunc {code:java} TRUNCATE TABLE tableName PARTITION (b_trunc = 'xy');{code} 2. Bucket transform on 'b' column - b_bucket {code:java} TRUNCATE TABLE tableName PARTITION (b_bucket = 10);{code} 3. Year transform on 'b' column - b_year - The value should be in YYYY format. {code:java} TRUNCATE TABLE tableName PARTITION (b_year = '2022');{code} 4. Month transform on 'b' column - b_month - The value should be in YYYY-MM format {code:java} TRUNCATE TABLE tableName PARTITION (b_month = '2022-08'); {code} 5. Day transform on 'b' column - b_day - The value should be in YYYY-MM-DD format {code:java} TRUNCATE TABLE tableName PARTITION (b_day = '2022-08-07');{code} 6. Hour transform on 'b' column - b_hour - The value should be in YYYY-MM-DD-HH format. {code:java} TRUNCATE TABLE tableName PARTITION (b_hour = '2022-08-07-13'); {code} Specifying multiple conditions is also supported - {code:java} TRUNCATE TABLE tableName PARTITION (b_day = '2022-08-07', c_trunc = 'xy');{code} The motivation for specifying the inputs in this format is based on the directory structure of the data in Iceberg tables. The input reflects the same values that are ideally seen in the data directories of Iceberg tables. 
For a table which has undergone partition evolution, truncate is possible only for the identity transform, and only for newly added partitions which are outside the lower and upper bounds of the partition column of the existing files (files prior to partition evolution). If the newly added partition is within the lower and upper bounds of the partition column of the existing files, then performing a truncate operation on the newly added partition throws a ValidationException. > Iceberg: Truncate partition support > --- > > Key: HIVE-27672 > URL: https://issues.apache.org/jira/browse/HIVE-27672 > Project: Hive > Issue Type: New Feature >Reporter: Sourabh Badhya >Assignee: Sourabh Badhya >Priority: Major > > Support the following truncate operations on a partition level - > {code:java} > TRUNCATE TABLE tableName PARTITION (partCol1 = partValue1, partCol2 = > partValue2);{code} > Truncate is not supported for partition transforms. -- This message was sent by Atlassian Jira (v8.20.10#820010)
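The transform-to-suffix naming scheme spelled out in the earlier (now superseded) description can be sketched as a small lookup table. `partition_spec` and `TRANSFORM_SPECS` are invented names for illustration, not Hive APIs, and the value validation is only a sketch of the formats listed above:

```python
import re

# Suffix and expected value pattern per partition transform, following the
# naming scheme in the superseded HIVE-27672 description (illustrative only).
TRANSFORM_SPECS = {
    "identity": ("", None),
    "trunc":    ("_trunc", None),
    "bucket":   ("_bucket", None),
    "year":     ("_year",  r"^\d{4}$"),
    "month":    ("_month", r"^\d{4}-\d{2}$"),
    "day":      ("_day",   r"^\d{4}-\d{2}-\d{2}$"),
    "hour":     ("_hour",  r"^\d{4}-\d{2}-\d{2}-\d{2}$"),
}

def partition_spec(column, transform, value):
    """Render one PARTITION clause entry, validating the value format."""
    suffix, pattern = TRANSFORM_SPECS[transform]
    if pattern and not re.match(pattern, str(value)):
        raise ValueError(f"bad value {value!r} for {transform} transform")
    return f"{column}{suffix} = '{value}'"

print(partition_spec("b", "day", "2022-08-07"))  # b_day = '2022-08-07'
print(partition_spec("c", "trunc", "xy"))        # c_trunc = 'xy'
```

The suffixed names mirror what the Iceberg data directories themselves contain, which is the motivation the description gives for this input format.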
[jira] [Updated] (HIVE-27666) Backport of HIVE-22903 : Vectorized row_number() resets the row number after one batch in case of constant expression in partition clause
[ https://issues.apache.org/jira/browse/HIVE-27666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-27666: Affects Version/s: 3.1.3 > Backport of HIVE-22903 : Vectorized row_number() resets the row number after > one batch in case of constant expression in partition clause > -- > > Key: HIVE-27666 > URL: https://issues.apache.org/jira/browse/HIVE-27666 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.3 >Reporter: Diksha >Assignee: Diksha >Priority: Major > Labels: pull-request-available > Fix For: 3.2.0 > > > Backport of HIVE-22903 : Vectorized row_number() resets the row number after > one batch in case of constant expression in partition clause -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HIVE-27666) Backport of HIVE-22903 : Vectorized row_number() resets the row number after one batch in case of constant expression in partition clause
[ https://issues.apache.org/jira/browse/HIVE-27666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan resolved HIVE-27666. - Fix Version/s: 3.2.0 Resolution: Fixed > Backport of HIVE-22903 : Vectorized row_number() resets the row number after > one batch in case of constant expression in partition clause > -- > > Key: HIVE-27666 > URL: https://issues.apache.org/jira/browse/HIVE-27666 > Project: Hive > Issue Type: Sub-task >Reporter: Diksha >Assignee: Diksha >Priority: Major > Labels: pull-request-available > Fix For: 3.2.0 > > > Backport of HIVE-22903 : Vectorized row_number() resets the row number after > one batch in case of constant expression in partition clause -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27666) Backport of HIVE-22903 : Vectorized row_number() resets the row number after one batch in case of constant expression in partition clause
[ https://issues.apache.org/jira/browse/HIVE-27666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27666: -- Labels: pull-request-available (was: ) > Backport of HIVE-22903 : Vectorized row_number() resets the row number after > one batch in case of constant expression in partition clause > -- > > Key: HIVE-27666 > URL: https://issues.apache.org/jira/browse/HIVE-27666 > Project: Hive > Issue Type: Sub-task >Reporter: Diksha >Assignee: Diksha >Priority: Major > Labels: pull-request-available > > Backport of HIVE-22903 : Vectorized row_number() resets the row number after > one batch in case of constant expression in partition clause -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-24621) TEXT and varchar datatype does not support unicode encoding in MSSQL
[ https://issues.apache.org/jira/browse/HIVE-24621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764118#comment-17764118 ] Nikhil Gupta commented on HIVE-24621: - [~ayushtkn], it is not a regression; this was never supported in earlier versions. But for global support and accessibility it is important. > TEXT and varchar datatype does not support unicode encoding in MSSQL > > > Key: HIVE-24621 > URL: https://issues.apache.org/jira/browse/HIVE-24621 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Nikhil Gupta >Assignee: Nikhil Gupta >Priority: Major > Labels: check > > Why Unicode is required? > In the following example the Chinese characters cannot be properly interpreted. > {noformat} > CREATE VIEW `test_view` AS select `test_tbl_char`.`col1` from > `test_db5`.`test_tbl_char` where `test_tbl_char`.`col1`='你好'; > show create table test_view; > ++ > | createtab_stmt | > ++ > | CREATE VIEW `test_view` AS select `test_tbl_char`.`col1` from > `test_db5`.`test_tbl_char` where `test_tbl_char`.`col1`='??' | > ++ {noformat} > > This issue comes because TBLS is defined as follows: > > CREATE TABLE TBLS > ( > TBL_ID bigint NOT NULL, > CREATE_TIME int NOT NULL, > DB_ID bigint NULL, > LAST_ACCESS_TIME int NOT NULL, > OWNER nvarchar(767) NULL, > OWNER_TYPE nvarchar(10) NULL, > RETENTION int NOT NULL, > SD_ID bigint NULL, > TBL_NAME nvarchar(256) NULL, > TBL_TYPE nvarchar(128) NULL, > VIEW_EXPANDED_TEXT text NULL, > VIEW_ORIGINAL_TEXT text NULL, > IS_REWRITE_ENABLED bit NOT NULL DEFAULT 0, > WRITE_ID bigint NOT NULL DEFAULT 0 > ); > The text data type does not support Unicode encoding irrespective of collation, and > the varchar data type does not support Unicode encoding prior to SQL Server 2019. > Also, a UTF8-enabled collation needs to be defined to use Unicode characters. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HIVE-27567) Support building multi-platform images
[ https://issues.apache.org/jira/browse/HIVE-27567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simhadri Govindappa resolved HIVE-27567. Resolution: Fixed > Support building multi-platform images > -- > > Key: HIVE-27567 > URL: https://issues.apache.org/jira/browse/HIVE-27567 > Project: Hive > Issue Type: Sub-task >Reporter: Zhihua Deng >Assignee: Simhadri Govindappa >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27567) Support building multi-platform images
[ https://issues.apache.org/jira/browse/HIVE-27567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764104#comment-17764104 ] Simhadri Govindappa commented on HIVE-27567: Fixed in HIVE-27277. From Hive 4.0.0-beta-1, the Hive docker image supports both arm64 and amd64 platforms. https://hub.docker.com/r/apache/hive/tags > Support building multi-platform images > -- > > Key: HIVE-27567 > URL: https://issues.apache.org/jira/browse/HIVE-27567 > Project: Hive > Issue Type: Sub-task >Reporter: Zhihua Deng >Assignee: Simhadri Govindappa >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-27674) Missing union subdir should be ignored in some cases
[ https://issues.apache.org/jira/browse/HIVE-27674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764039#comment-17764039 ] László Bodor edited comment on HIVE-27674 at 9/12/23 7:20 AM: -- the issue I was working on was fixed as part of HIVE-24682. I was reproducing this on a downstream custom Hive version, but I was not able to reproduce it upstream; then I found that it was fixed by changes in the Utilities class in HIVE-24682, more specifically: https://github.com/apache/hive/commit/2f2b7a165cdc341391c3ec049c0668ce9eb6db58#diff-44b2ff3a3c4a6cfcaed0fcb40b74031844f8586e40a6f8261637e5ebcd558b73R4501-R4511 without the change above, files ended up as a non-empty collection: {code} Path[] files = null; if (!isInsertOverwrite || dpLevels == 0 || !dynamicPartitionSpecs.isEmpty()) { files = getDirectInsertDirectoryCandidates( fs, specPath, dpLevels, filter, writeId, stmtId, hconf, isInsertOverwrite, acidOperation); } {code} hence directInsertDirectories became a non-empty collection too: {code} ArrayList directInsertDirectories = new ArrayList<>(); if (files != null) { for (Path path : files) { Utilities.FILE_OP_LOGGER.info("Looking at path: {}", path); directInsertDirectories.add(path); } } {code} {code} [file:/Users/laszlobodor/CDH/hive/itests/qtest/target/localfs/warehouse/lbodor_test2/dt=20230817/base_001] {code} so when this method is called with unionSuffix=HIVE_UNION_SUBDIR_1, which doesn't exist, we hit this codepath, which is the problem: {code} if (!directInsertDirectories.isEmpty()) { cleanDirectInsertDirectoriesConcurrently(directInsertDirectories, committed, fs, hconf, unionSuffix, lbLevels); } {code} my PR here was meant to make that scenario more lenient, but actually it just covered up an earlier problem, which has been fixed by HIVE-24682 was (Author: abstractdog): the issue I was working on was fixed as part of HIVE-24682 > Missing union subdir should be ignored in some cases > --- > > Key: HIVE-27674 > URL: 
https://issues.apache.org/jira/browse/HIVE-27674 > Project: Hive > Issue Type: Bug >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Labels: pull-request-available > > when a union job creates files only in specific subdirs, this can happen: > {code} > ERROR : Job Commit failed with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException: > File > hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 > does not exist.)' > org.apache.hadoop.hive.ql.metadata.HiveException: > java.io.FileNotFoundException: File > hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 > does not exist. > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1528) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:797) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:646) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:344) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) > at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) > at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) > at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) > at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:770) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:504) > at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:498) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:229) > at > org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:329) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.
[jira] [Resolved] (HIVE-27674) Missing union subdir should be ignored in some cases
[ https://issues.apache.org/jira/browse/HIVE-27674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] László Bodor resolved HIVE-27674. - Resolution: Invalid > Missing union subdir should be ignored in some cases > --- > > Key: HIVE-27674 > URL: https://issues.apache.org/jira/browse/HIVE-27674 > Project: Hive > Issue Type: Bug >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Labels: pull-request-available > > when a union job creates files only in specific subdirs, this can happen: > {code} > ERROR : Job Commit failed with exception > 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException: > File > hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 > does not exist.)' > org.apache.hadoop.hive.ql.metadata.HiveException: > java.io.FileNotFoundException: File > hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 > does not exist. 
> at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1528) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:797) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:646) > at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:344) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) > at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) > at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) > at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) > at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:770) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:504) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:498) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:229) > at > org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:329) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) > at > org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:347) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.FileNotFoundException: File > hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 > does not exist. > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1097) > at > org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:145) > at > org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1168) > at > org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1165) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1175) > at > org.apache.hadoop.hive.ql.exec.Utilities.removeTempOrDuplicateFiles(Utilities.java:1794) > at > org.apache.hadoop.hive.ql.exec.Utilities.handleDirectInsertTableFinalPath(Utilities.java:4579) > at > org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:
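The stack trace above shows job commit failing inside `Utilities.removeTempOrDuplicateFiles` when it lists a `HIVE_UNION_SUBDIR_*` directory that no task ever created. The failure mode can be illustrated with a minimal Python sketch (hypothetical helper name and directory layout, not Hive's actual Java code): listing an expected subdirectory that was never written raises a FileNotFoundError, which aborts the whole commit.

```python
# Minimal sketch (hypothetical, not Hive's Java implementation) of the
# union-subdir commit failure: the commit step lists each expected
# HIVE_UNION_SUBDIR_* directory, and a subdir no task wrote to is missing.
import os
import tempfile

def list_union_subdirs(base, expected_subdirs):
    """Return the file listing of each expected union subdirectory.

    Raises FileNotFoundError if any expected subdirectory was never
    created -- analogous to the listStatus failure in the stack trace.
    """
    results = {}
    for sub in expected_subdirs:
        path = os.path.join(base, sub)
        results[sub] = os.listdir(path)  # raises FileNotFoundError if missing
    return results

with tempfile.TemporaryDirectory() as base:
    # Only subdir 2 is created; the union branch for subdir 1 produced no files.
    os.makedirs(os.path.join(base, "HIVE_UNION_SUBDIR_2"))
    try:
        list_union_subdirs(base, ["HIVE_UNION_SUBDIR_1", "HIVE_UNION_SUBDIR_2"])
    except FileNotFoundError as e:
        print("commit fails:", e)
```

Tolerating (or skipping) the missing subdirectory at listing time is the kind of behavior the issue title asks for; per the comment below, the underlying problem was already addressed by HIVE-24682.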
[jira] [Created] (HIVE-27684) Bump org.slf4j:slf4j-api to 2.0.9
Akshat Mathur created HIVE-27684: Summary: Bump org.slf4j:slf4j-api to 2.0.9 Key: HIVE-27684 URL: https://issues.apache.org/jira/browse/HIVE-27684 Project: Hive Issue Type: Improvement Affects Versions: 4.0.0-beta-1 Reporter: Akshat Mathur Assignee: Akshat Mathur Upgrading to the latest version. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27674) Missing union subdir should be ignored in some cases
[ https://issues.apache.org/jira/browse/HIVE-27674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] László Bodor updated HIVE-27674: Description: when a union job creates files only in specific subdirs, this can happen: {code} ERROR : Job Commit failed with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.FileNotFoundException: File hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 does not exist.)' org.apache.hadoop.hive.ql.metadata.HiveException: java.io.FileNotFoundException: File hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 does not exist. at org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1528) at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:797) at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) at org.apache.hadoop.hive.ql.exec.Operator.jobClose(Operator.java:802) at org.apache.hadoop.hive.ql.exec.tez.TezTask.close(TezTask.java:646) at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:344) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:770) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:504) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:498) at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:229) at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:91) at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:329) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:347) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.FileNotFoundException: File hdfs://c3857-node3.coelab.cloudera.com:8020/warehouse/tablespace/managed/hive/lbodor_test2/dt=20230817/base_001/HIVE_UNION_SUBDIR_1 does not exist. 
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1097) at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:145) at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1168) at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1165) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1175) at org.apache.hadoop.hive.ql.exec.Utilities.removeTempOrDuplicateFiles(Utilities.java:1794) at org.apache.hadoop.hive.ql.exec.Utilities.handleDirectInsertTableFinalPath(Utilities.java:4579) at org.apache.hadoop.hive.ql.exec.FileSinkOperator.jobCloseOp(FileSinkOperator.java:1522) ... 31 more {code} Please find the repro in the PR. > Missing union subdir should be ignored in some cases > --- > > Key: HIVE-27674 > URL: https://issues.apache.org/jira/browse/HIVE-27674 > Project: Hive > Issue Type: Bug >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Labels: pull-request-available > > when a union job
[jira] [Commented] (HIVE-27674) Missing union subdir should be ignored in some cases
[ https://issues.apache.org/jira/browse/HIVE-27674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764039#comment-17764039 ] László Bodor commented on HIVE-27674: - The issue I was working on was fixed as part of HIVE-24682. > Missing union subdir should be ignored in some cases > --- > > Key: HIVE-27674 > URL: https://issues.apache.org/jira/browse/HIVE-27674 > Project: Hive > Issue Type: Bug >Reporter: László Bodor >Assignee: László Bodor >Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27388) Backport of HIVE-23058: Compaction task reattempt fails with FileAlreadyExistsException
[ https://issues.apache.org/jira/browse/HIVE-27388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-27388: Affects Version/s: 3.1.3 > Backport of HIVE-23058: Compaction task reattempt fails with > FileAlreadyExistsException > --- > > Key: HIVE-27388 > URL: https://issues.apache.org/jira/browse/HIVE-27388 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.3 >Reporter: Diksha >Assignee: Diksha >Priority: Major > Labels: pull-request-available > Fix For: 3.2.0 > > > Backport of HIVE-23058: Compaction task reattempt fails with > FileAlreadyExistsException -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HIVE-27388) Backport of HIVE-23058: Compaction task reattempt fails with FileAlreadyExistsException
[ https://issues.apache.org/jira/browse/HIVE-27388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan resolved HIVE-27388. - Fix Version/s: 3.2.0 Resolution: Fixed > Backport of HIVE-23058: Compaction task reattempt fails with > FileAlreadyExistsException > --- > > Key: HIVE-27388 > URL: https://issues.apache.org/jira/browse/HIVE-27388 > Project: Hive > Issue Type: Sub-task >Reporter: Diksha >Assignee: Diksha >Priority: Major > Labels: pull-request-available > Fix For: 3.2.0 > > > Backport of HIVE-23058: Compaction task reattempt fails with > FileAlreadyExistsException -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27682) AlterTableAlterPartitionOperation cannot change the column type if the table has default partition
[ https://issues.apache.org/jira/browse/HIVE-27682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27682: -- Labels: pull-request-available (was: ) > AlterTableAlterPartitionOperation cannot change the column type if the table > has default partition > -- > > Key: HIVE-27682 > URL: https://issues.apache.org/jira/browse/HIVE-27682 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Minor > Labels: pull-request-available > > Steps to reproduce: > {noformat} > create database pt; > create table pt.alterdynamic_part_table(intcol string) partitioned by > (partcol1 string, partcol2 string); > insert into table pt.alterdynamic_part_table partition(partcol1, partcol2) > select NULL, '2', NULL; > alter table pt.alterdynamic_part_table partition column (partcol2 > int);{noformat} > The following exception is thrown: > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: Exception while checking > type conversion of existing partition values to FieldSchema(name:partcol2, > type:int, comment:null) : Exception while converting string to int for value > : NULL > at > org.apache.hadoop.hive.ql.ddl.table.partition.alter.AlterTableAlterPartitionOperation.check(AlterTableAlterPartitionOperation.java:69) > at > org.apache.hadoop.hive.ql.ddl.table.partition.alter.AlterTableAlterPartitionOperation.execute(AlterTableAlterPartitionOperation.java:55){noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
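The root cause in HIVE-27682 can be illustrated outside Hive with a minimal Python sketch (hypothetical helper, not the actual Java code in `AlterTableAlterPartitionOperation.check`): the dynamic insert above stores the NULL partition value as Hive's default-partition sentinel string, and that sentinel cannot be converted to the new `int` column type, so the pre-change validation of existing partition values fails.

```python
# Minimal sketch (hypothetical, not Hive's Java implementation) of the
# type-conversion check that ALTER ... PARTITION COLUMN performs on
# existing partition values before changing the column type.
# "__HIVE_DEFAULT_PARTITION__" is Hive's sentinel for NULL partition values
# (configurable via hive.exec.default.partition.name).
DEFAULT_PARTITION_NAME = "__HIVE_DEFAULT_PARTITION__"

def unconvertible_values(partition_values, convert):
    """Return the partition values that cannot be converted to the new type."""
    failures = []
    for value in partition_values:
        try:
            convert(value)
        except (TypeError, ValueError):
            failures.append(value)
    return failures

# After the dynamic insert, partcol2 has one default (NULL) partition:
existing = [DEFAULT_PARTITION_NAME]
print(unconvertible_values(existing, int))  # → ['__HIVE_DEFAULT_PARTITION__']
```

A check along these lines has to special-case the default partition (skip the sentinel rather than attempt the conversion) for the ALTER to succeed on tables that contain one.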