[jira] [Created] (HIVE-27689) Iceberg: Remove unused iceberg property
zhangbutao created HIVE-27689: - Summary: Iceberg: Remove unused iceberg property Key: HIVE-27689 URL: https://issues.apache.org/jira/browse/HIVE-27689 Project: Hive Issue Type: Improvement Components: Iceberg integration Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27689) Iceberg: Remove unused iceberg property
[ https://issues.apache.org/jira/browse/HIVE-27689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhangbutao reassigned HIVE-27689: - Assignee: zhangbutao > Iceberg: Remove unused iceberg property > -- > > Key: HIVE-27689 > URL: https://issues.apache.org/jira/browse/HIVE-27689 > Project: Hive > Issue Type: Improvement > Components: Iceberg integration >Reporter: zhangbutao >Assignee: zhangbutao >Priority: Minor > --
[jira] [Commented] (HIVE-27665) Change Filter Parser on HMS to allow backticks
[ https://issues.apache.org/jira/browse/HIVE-27665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764944#comment-17764944 ] Yuming Wang commented on HIVE-27665: PR: https://github.com/apache/hive/pull/4667 > Change Filter Parser on HMS to allow backticks > -- > > Key: HIVE-27665 > URL: https://issues.apache.org/jira/browse/HIVE-27665 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Steve Carlin >Assignee: Steve Carlin >Priority: Major > > The PartitionFilter parser on HMS does not allow backticks. This is > currently causing failures for a customer that has a column named 'date', which is a > keyword. > There is more work to be done if we want the HS2 client to support filters > with backticked columns, but that will be done in a different Jira
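To make the motivation concrete, here is a small sketch in plain Java (hypothetical helpers, not Hive's actual HMS grammar code) of the filter shape a client would send once the HMS PartitionFilter parser accepts backticked identifiers, so that a keyword column such as 'date' stays unambiguous:

```java
// Hypothetical illustration only: quote() and buildFilter() are NOT Hive APIs.
// They show the filter string form ( `date` = "2023-09-13" ) that HIVE-27665
// wants the metastore's partition-filter parser to accept.
public class PartitionFilterDemo {

    // Wrap an identifier in backticks so keywords like "date" parse as names.
    static String quote(String column) {
        return "`" + column + "`";
    }

    // Build a simple equality filter over one partition column.
    static String buildFilter(String column, String value) {
        return quote(column) + " = \"" + value + "\"";
    }

    public static void main(String[] args) {
        System.out.println(buildFilter("date", "2023-09-13"));
    }
}
```

Such a string would then be passed to a metastore listing call that takes a filter expression; today the HMS parser rejects the backticked form.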
[jira] [Commented] (HIVE-27309) Large number of partitions and small files causes OOM in query coordinator
[ https://issues.apache.org/jira/browse/HIVE-27309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764711#comment-17764711 ] Denys Kuzmenko commented on HIVE-27309: --- Merged to master [~difin], thanks for the patch, and [~zhangbutao] for the review! > Large number of partitions and small files causes OOM in query coordinator > -- > > Key: HIVE-27309 > URL: https://issues.apache.org/jira/browse/HIVE-27309 > Project: Hive > Issue Type: Improvement > Components: Iceberg integration >Reporter: Rajesh Balamohan >Assignee: Dmitriy Fingerman >Priority: Major > Labels: pull-request-available > > When large number of nested partitions (with small files) are read, AM bails > out with OOM. > {noformat} > CREATE EXTERNAL TABLE `store_sales_delete_6`( > `ss_sold_time_sk` int, > `ss_item_sk` int, > `ss_customer_sk` int, > `ss_cdemo_sk` int, > `ss_hdemo_sk` int, > `ss_addr_sk` int, > `ss_store_sk` int, > `ss_promo_sk` int, > `ss_ticket_number` bigint, > `ss_quantity` int, > `ss_wholesale_cost` decimal(7,2), > `ss_list_price` decimal(7,2), > `ss_sales_price` decimal(7,2), > `ss_ext_discount_amt` decimal(7,2), > `ss_ext_sales_price` decimal(7,2), > `ss_ext_wholesale_cost` decimal(7,2), > `ss_ext_list_price` decimal(7,2), > `ss_ext_tax` decimal(7,2), > `ss_coupon_amt` decimal(7,2), > `ss_net_paid` decimal(7,2), > `ss_net_paid_inc_tax` decimal(7,2), > `ss_net_profit` decimal(7,2), > `ss_sold_date_sk` int) > PARTITIONED BY SPEC ( > ss_store_sk, ss_promo_sk, ss_sold_date_sk) STORED by iceberg LOCATION > 's3a://blah/blah/tablespace/external/hive/blah.db/store_sales_delete_6'; > alter table store_sales_delete_6 set > tblproperties('format'='iceberg/parquet'); > alter table store_sales_delete_6 set > tblproperties('format-version'='2');insert into store_sales_delete_6 select * > from tpcds_1000_update.ssv limit 10;; > select count(*) from store_sales_delete_6; > {noformat} > Now, select count query throws OOM in query AM. 
This query generates 100,000 > splits which are grouped together into 41 splits. But streaming this and > sending as events throws OOM.
[jira] [Resolved] (HIVE-27309) Large number of partitions and small files causes OOM in query coordinator
[ https://issues.apache.org/jira/browse/HIVE-27309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denys Kuzmenko resolved HIVE-27309. --- Fix Version/s: 4.0.0 Resolution: Fixed > Large number of partitions and small files causes OOM in query coordinator > -- > > Key: HIVE-27309 > URL: https://issues.apache.org/jira/browse/HIVE-27309 > Project: Hive > Issue Type: Improvement > Components: Iceberg integration >Reporter: Rajesh Balamohan >Assignee: Dmitriy Fingerman >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > When large number of nested partitions (with small files) are read, AM bails > out with OOM. > {noformat} > CREATE EXTERNAL TABLE `store_sales_delete_6`( > `ss_sold_time_sk` int, > `ss_item_sk` int, > `ss_customer_sk` int, > `ss_cdemo_sk` int, > `ss_hdemo_sk` int, > `ss_addr_sk` int, > `ss_store_sk` int, > `ss_promo_sk` int, > `ss_ticket_number` bigint, > `ss_quantity` int, > `ss_wholesale_cost` decimal(7,2), > `ss_list_price` decimal(7,2), > `ss_sales_price` decimal(7,2), > `ss_ext_discount_amt` decimal(7,2), > `ss_ext_sales_price` decimal(7,2), > `ss_ext_wholesale_cost` decimal(7,2), > `ss_ext_list_price` decimal(7,2), > `ss_ext_tax` decimal(7,2), > `ss_coupon_amt` decimal(7,2), > `ss_net_paid` decimal(7,2), > `ss_net_paid_inc_tax` decimal(7,2), > `ss_net_profit` decimal(7,2), > `ss_sold_date_sk` int) > PARTITIONED BY SPEC ( > ss_store_sk, ss_promo_sk, ss_sold_date_sk) STORED by iceberg LOCATION > 's3a://blah/blah/tablespace/external/hive/blah.db/store_sales_delete_6'; > alter table store_sales_delete_6 set > tblproperties('format'='iceberg/parquet'); > alter table store_sales_delete_6 set > tblproperties('format-version'='2');insert into store_sales_delete_6 select * > from tpcds_1000_update.ssv limit 10;; > select count(*) from store_sales_delete_6; > {noformat} > Now, select count query throws OOM in query AM. This query generates 100,000 > splits which are grouped together into 41 splits. 
But streaming this and > sending as events throws OOM.
[jira] [Resolved] (HIVE-27648) CREATE TABLE with CHECK constraint fails with SemanticException
[ https://issues.apache.org/jira/browse/HIVE-27648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa resolved HIVE-27648. --- Resolution: Fixed Merged to master. Thanks [~dkuzmenko] and [~soumyakanti.das] for review. > CREATE TABLE with CHECK constraint fails with SemanticException > --- > > Key: HIVE-27648 > URL: https://issues.apache.org/jira/browse/HIVE-27648 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Soumyakanti Das >Assignee: Krisztian Kasa >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > When we run: > {code:java} > create table test ( > col1 int, > `col 2` int check (`col 2` > 10) enable novalidate rely, > constraint check_constraint check (col1 + `col 2` > 15) enable novalidate > rely > ); > {code} > It fails with: > > {code:java} > org.apache.hadoop.hive.ql.parse.SemanticException: Invalid Constraint syntax > Invalid CHECK constraint expression: col 2 > 10. > at > org.apache.hadoop.hive.ql.ddl.table.constraint.ConstraintsUtils.validateCheckConstraint(ConstraintsUtils.java:462) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:13839) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12618) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12787) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:467) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224) > at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430) > 
at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:227) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257) > at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:425) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:356) > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:733) > at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:703) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:115) > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157) > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:62) > {code} > > I noticed while debugging that the check constraint expression in > [cc.getCheck_expression()|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/ConstraintsUtils.java#L446] > doesn't include the backticks (`), and this results in wrong token > generation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27648) CREATE TABLE with CHECK constraint fails with SemanticException
[ https://issues.apache.org/jira/browse/HIVE-27648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-27648: -- Fix Version/s: 4.0.0 > CREATE TABLE with CHECK constraint fails with SemanticException > --- > > Key: HIVE-27648 > URL: https://issues.apache.org/jira/browse/HIVE-27648 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Soumyakanti Das >Assignee: Krisztian Kasa >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > When we run: > {code:java} > create table test ( > col1 int, > `col 2` int check (`col 2` > 10) enable novalidate rely, > constraint check_constraint check (col1 + `col 2` > 15) enable novalidate > rely > ); > {code} > It fails with: > > {code:java} > org.apache.hadoop.hive.ql.parse.SemanticException: Invalid Constraint syntax > Invalid CHECK constraint expression: col 2 > 10. > at > org.apache.hadoop.hive.ql.ddl.table.constraint.ConstraintsUtils.validateCheckConstraint(ConstraintsUtils.java:462) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:13839) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12618) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12787) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:467) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224) > at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436) > at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430) > at > 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:227) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257) > at org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:201) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:127) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:425) > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:356) > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:733) > at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:703) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:115) > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157) > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:62) > {code} > > I noticed while debugging that the check constraint expression in > [cc.getCheck_expression()|https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/ConstraintsUtils.java#L446] > doesn't include the backticks (`), and this results in wrong token > generation. -- This message was sent by Atlassian Jira (v8.20.10#820010)
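The root cause described above, losing the backticks around `col 2` before validation, can be illustrated with a toy tokenizer (plain Java; this is not Hive's ANTLR grammar): without the quotes, the expression "col 2 > 10" splits into two tokens "col" and "2" instead of one identifier, which is exactly the wrong token generation the reporter observed.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy tokenizer (illustrative, not Hive's parser). Once the backticks around
// `col 2` are stripped, naive whitespace splitting yields [col, 2, >, 10]
// instead of [`col 2`, >, 10], so constraint validation sees a broken expression.
public class BacktickTokenDemo {

    // What effectively happens after the backticks are lost.
    static List<String> tokenizeNaive(String expr) {
        return Arrays.asList(expr.trim().split("\\s+"));
    }

    // Backtick-aware split: a quoted identifier stays one token.
    static List<String> tokenizeQuoted(String expr) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < expr.length()) {
            char c = expr.charAt(i);
            if (Character.isWhitespace(c)) {
                i++;
            } else if (c == '`') {
                int end = expr.indexOf('`', i + 1);   // matching closing backtick
                tokens.add(expr.substring(i, end + 1));
                i = end + 1;
            } else {
                int j = i;
                while (j < expr.length() && !Character.isWhitespace(expr.charAt(j))
                        && expr.charAt(j) != '`') {
                    j++;
                }
                tokens.add(expr.substring(i, j));
                i = j;
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenizeNaive("col 2 > 10"));    // [col, 2, >, 10]
        System.out.println(tokenizeQuoted("`col 2` > 10")); // [`col 2`, >, 10]
    }
}
```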
[jira] [Updated] (HIVE-27517) SessionState is not correctly initialized when hive.security.authorization.createtable.group.grants is set to automatically grant privileges
[ https://issues.apache.org/jira/browse/HIVE-27517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27517: -- Labels: pull-request-available (was: ) > SessionState is not correctly initialized when > hive.security.authorization.createtable.group.grants is set to automatically > grant privileges > > > Key: HIVE-27517 > URL: https://issues.apache.org/jira/browse/HIVE-27517 > Project: Hive > Issue Type: Bug >Reporter: ConfX >Priority: Critical > Labels: pull-request-available > Attachments: reproduce.sh > > > h2. What happened: > When {{hive.security.authorization.createtable.group.grants}} is set to some > value, the grant may fail to apply to the specified groups > due to incorrect {{SessionState}} initialization, and the system crashes. > h2. Buggy code: > When the {{getAuthenticator()}} method of the {{SessionState}} class is called, it first > executes {{{}setupAuth(){}}}, which sets up authentication and authorization > plugins for this session. > {noformat} > /** > * Setup authentication and authorization plugins for this session. > */ > private synchronized void setupAuth() { > ... > // create the create table grants with new config > createTableGrants = CreateTableAutomaticGrant.create(sessionConf); > ... > }{noformat} > During table-grant creation, the group grants are built from {{sessionConf}} via > {{{}getGrantMap(){}}}. This method validates each privilege with the > {{getPrivilege}} method, and eventually the {{getPrivilegeFromRegistry}} method > is executed. > {noformat} > private static Privilege getPrivilegeFromRegistry(PrivilegeType ptype) { > return SessionState.get().isAuthorizationModeV2() ? RegistryV2.get(ptype) > : Registry.get(ptype); > }{noformat} > However, {{SessionState.get()}} can return null because the state may not be > correctly initialized. > In {{{}SessionState.java{}}}, the {{get()}} method returns > {{{}tss.get().state{}}}.
If the current thread does not have a SessionStates > initialized, then {{get()}} will create a new SessionStates by calling > {{initialValue()}} below. This calls the default constructor of the > {{SessionStates}} class, which initializes neither the {{SessionState}} field > nor the {{HiveConf}} field. > {noformat} > /** > * get the current session. > */ > public static SessionState get() { > return tss.get().state; > }/** > * Singleton Session object per thread. > * > **/ > private static ThreadLocal tss = new > ThreadLocal() { > @Override > protected SessionStates initialValue() { > return new SessionStates(); > } > };private static class SessionStates { > private SessionState state; > private HiveConf conf; > private void attach(SessionState state) { > this.state = state; > attach(state.getConf()); > } > private void attach(HiveConf conf) { > this.conf = conf; ClassLoader classLoader = conf.getClassLoader(); > if (classLoader != null) { > Thread.currentThread().setContextClassLoader(classLoader); > } > } > }{noformat} > h2. How to reproduce: > (1) Set {{hive.security.authorization.createtable.group.grants}} to some > value, e.g. {{abc,def:create;xlab,tyx:all;}} > (2) Run test > {{org.apache.hadoop.hive.ql.parse.authorization.TestSessionUserName#testSessionGetGroupNames}} > h2.
StackTrace: > {noformat} > java.lang.NullPointerException > > at > org.apache.hadoop.hive.ql.security.authorization.PrivilegeRegistry.getPrivilegeFromRegistry(PrivilegeRegistry.java:77) > at > org.apache.hadoop.hive.ql.security.authorization.PrivilegeRegistry.getPrivilege(PrivilegeRegistry.java:72) > at > org.apache.hadoop.hive.ql.session.CreateTableAutomaticGrant.validatePrivilege(CreateTableAutomaticGrant.java:108) > at > org.apache.hadoop.hive.ql.session.CreateTableAutomaticGrant.getGrantorInfoList(CreateTableAutomaticGrant.java:91) > at > org.apache.hadoop.hive.ql.session.CreateTableAutomaticGrant.getGrantMap(CreateTableAutomaticGrant.java:73) > at > org.apache.hadoop.hive.ql.session.CreateTableAutomaticGrant.create(CreateTableAutomaticGrant.java:47) > at > org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:996) > at > org.apache.hadoop.hive.ql.session.SessionState.getAuthenticator(SessionState.java:1744) > {noformat} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
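The initialization hazard in the report can be reproduced with a standalone sketch (toy names modeled on Hive's {{SessionState}}; this is not Hive's code): {{ThreadLocal.withInitial}} builds an empty holder, so {{get()}} returns null on any thread that never attached a state, and a chained call like {{SessionState.get().isAuthorizationModeV2()}} then throws a NullPointerException.

```java
// Toy model of the SessionState thread-local holder (names are illustrative).
public class ThreadLocalHolderDemo {

    static class State {
        final String name;
        State(String name) { this.name = name; }
        boolean isAuthorizationModeV2() { return true; }
    }

    // Mirrors SessionStates: the default-constructed holder leaves state null.
    static class Holder { State state; }

    static final ThreadLocal<Holder> TSS = ThreadLocal.withInitial(Holder::new);

    // Mirrors SessionState.get(): null until attach() runs on this thread.
    static State get() { return TSS.get().state; }

    static void attach(State state) { TSS.get().state = state; }

    public static void main(String[] args) {
        // Before attach(), get() is null, so get().isAuthorizationModeV2()
        // would NPE, exactly like getPrivilegeFromRegistry() in the trace above.
        System.out.println(get() == null);
        attach(new State("session"));
        System.out.println(get().isAuthorizationModeV2());
    }
}
```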
[jira] [Comment Edited] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764575#comment-17764575 ] liang yu edited comment on HIVE-27688 at 9/13/23 10:36 AM: --- In branch-3.1 I traced the code of MoveTask and found that when we execute the sql "insert overwrite table ... partition (XX) select ...", it throws a HiveException whose message is overwritten by the method getHiveException(e, msg). Here is the chain of how it gets overwritten: {code:java} execute (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 273) -> handleStaticParts (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 392) -> loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472) -> replaceFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 1817) -> moveFile (org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136) -> needToCopy (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3681) -> throw HiveException (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856) {code} but this HiveException's message is overwritten by the method moveFile (org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136): it catches the exception and replaces the message with "Unable to move source /path/to/source to destination /path/to/dest", which is a very generic error message. But when I execute the sql "insert into table ... partition (XX) select ...", it throws a HiveException whose message is not overwritten. Here is the chain of how it throws the correct exception: {code:java} execute (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 273) -> handleStaticParts (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 392) -> loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472) -> copyFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 1821) -> copyFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3937) -> needToCopy (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3389) -> throw HiveException (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856) {code} This HiveException is thrown and caught by the method loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472), whose error message is not overwritten. Solution: I changed the code at org/apache/hadoop/hive/ql/metadata/Hive.java: line 3771 from {code:java} throw getHiveException(e, msg); {code} to {code:java} throw getHiveException(e, e.getMessage(), msg); {code} and it returns the correct error message. In the master branch this is at org/apache/hadoop/hive/ql/metadata/Hive.java: line 5110. > hive MoveTask cannot show the correct exception message
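The proposed change can be sketched in plain Java (the {{HiveException}} class and {{getHiveException}} overloads below are simplified stand-ins, not Hive's real signatures): wrapping with only the generic message hides the root cause, while passing {{e.getMessage()}} through preserves it.

```java
// Simplified stand-ins for Hive's exception-wrapping helper (illustrative only).
public class ExceptionMessageDemo {

    static class HiveException extends Exception {
        HiveException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Before the fix: the generic message replaces the root cause entirely.
    static HiveException getHiveException(Exception e, String msg) {
        return new HiveException(msg, e);
    }

    // After the fix: keep the original message in front of the generic one.
    static HiveException getHiveException(Exception e, String origMsg, String msg) {
        return new HiveException(origMsg + ": " + msg, e);
    }

    public static void main(String[] args) {
        Exception root = new Exception("file is not owned by hive");
        // Generic wrapping loses "file is not owned by hive".
        System.out.println(getHiveException(root, "Unable to move source").getMessage());
        // Preserving wrapping keeps the actionable root cause visible.
        System.out.println(
            getHiveException(root, root.getMessage(), "Unable to move source").getMessage());
    }
}
```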
[jira] [Comment Edited] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764585#comment-17764585 ] liang yu edited comment on HIVE-27688 at 9/13/23 10:31 AM: --- cc [~aturoczy] [~dkuzmenko] was (Author: JIRAUSER299608): cc [~aturoczy] > hive MoveTask cannot show the correct exception message > --- > > Key: HIVE-27688 > URL: https://issues.apache.org/jira/browse/HIVE-27688 > Project: Hive > Issue Type: Bug >Reporter: liang yu >Assignee: liang yu >Priority: Major > Attachments: image-2023-09-13-17-39-27-864.png, > image-2023-09-13-17-40-02-981.png > > > I am using hive.version=3.1.3; hadoop.version=3.3.4. > hive.load.data.owner is set to hive, and I used the user ubd_by to execute sql. > When I try to insert *overwrite* to an {*}existing table partition{*}, I get > the exception: Unable to move source /path/to/source to destination > /path/to/dest, which is a very generic error message and gives me no helpful > information. > > {code:java} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source > hdfs://xl/user/ubd_master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging to destination > hdfs://xl/user/ubd_master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging {code} > > But when I try to insert *into* an {*}existing table partition{*}, I get the > exception: Unable to move source /path/to/source to destination > /path/to/dest as the file is not owned by hive and load data is also not ran > as hive, which gives me a very helpful error message telling me that I should fix the > hive.load.data.owner configuration. > > {code:java} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.exec.MoveTask: Load Data failed for > hdfs://xl/user/ubd_master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.hive-staging_hive_2023-09-13_17-34-31_302_5892190500368248766-1/-ext-1 as > the file is not owned by hive and load data is also not ran as hive{code} > > >
[jira] [Commented] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764585#comment-17764585 ] liang yu commented on HIVE-27688: - cc [~aturoczy] > hive MoveTask cannot show the correct exception message > --- > > Key: HIVE-27688 > URL: https://issues.apache.org/jira/browse/HIVE-27688 > Project: Hive > Issue Type: Bug >Reporter: liang yu >Assignee: liang yu >Priority: Major > Attachments: image-2023-09-13-17-39-27-864.png, > image-2023-09-13-17-40-02-981.png > > > I am using hive.version=3.1.3; hadoop.version=3.3.4. > hive.load.data.owner is set to hive, and I used the user ubd_by to execute sql. > When I try to insert *overwrite* to an {*}existing table partition{*}, I get > the exception: Unable to move source /path/to/source to destination > /path/to/dest, which is a very generic error message and gives me no helpful > information. > > {code:java} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source > hdfs://xl/user/ubd_master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging to destination > hdfs://xl/user/ubd_master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging {code} > > But when I try to insert *into* an {*}existing table partition{*}, I get the > exception: Unable to move source /path/to/source to destination > /path/to/dest as the file is not owned by hive and load data is also not ran > as hive, which gives me a very helpful error message telling me that I should fix the > hive.load.data.owner configuration. > > {code:java} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.exec.MoveTask: Load Data failed for > hdfs://xl/user/ubd_master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.hive-staging_hive_2023-09-13_17-34-31_302_5892190500368248766-1/-ext-1 as > the file is not owned by hive and load data is also not ran as hive{code} > > >
[jira] [Comment Edited] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764575#comment-17764575 ] liang yu edited comment on HIVE-27688 at 9/13/23 10:19 AM: --- I traced the code of moveTask, and find that when we execute sql "insert overwrite into table partition (XX) select ", it will throw HiveException whose exception message is overwritten by method getHiveException(e, msg). Here is the chain of how it gets overwritten: {code:java} execute(handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 273)> handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 392)> loadPartition(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 472) ->replaceFiles(org/apache/hadoop/hive/ql/metadata/Hive.java: line 1817) -> moveFile(org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136) -> needToCopy(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3681) -> throw HiveException(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856) {code} but this HiveException's message was overwritten by method moveFile(org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136), it catches the exception and replaced the message with("Unable to move source /path/to/source to destination /path/to/dest, which is a very regular error message") But when I execute sql "insert into table partition (XX) select ...", it will throw HiveException which is not overwritten. 
Here is the chain of how it throws the correct Exception: {code:java} execute(handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 273)> handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 392)> loadPartition(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 472) -> copyFiles(org/apache/hadoop/hive/ql/metadata/Hive.java: line 1821) -> copyFiles(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3937) -> needToCopy(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3389) -> throw HiveException(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856) {code} this HiveException was thrown and caught by method loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java:line 472) whose error message is not overwritten. Solution: I changed the code in org/apache/hadoop/hive/ql/metadata/Hive.java: line 3771 {code:java} throw getHiveException(e, msg); {code} to {code:java} throw getHiveException(e, e.getMessage(), msg){code} and it returns the correct error message was (Author: JIRAUSER299608): I traced the code of moveTask, and find that when we execute sql "insert overwrite into table partition (XX) select ", it will throw HiveException whose exception message is overwritten by method getHiveException(e, msg). 
Here is the chain of how it gets overwritten: {code:java} execute(handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 273)> handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 392)> loadPartition(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 472) ->replaceFiles(org/apache/hadoop/hive/ql/metadata/Hive.java: line 1817) -> moveFile(org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136) -> needToCopy(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3681) -> throw HiveException(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856) {code} but this HiveException's message was overwritten by method moveFile(org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136), it catches the exception and replaced the message with("Unable to move source /path/to/source to destination /path/to/dest, which is a very regular error message") But when I execute sql "insert into table partition (XX) select ...", it will throw HiveException which is not overwritten. Here is the chain of how it throws the correct Exception: {code:java} execute(handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 273)> handleStaticParts(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 392)> loadPartition(org/apache/hadoop/hive/ql/exec/MoveTask.java:line 472) -> copyFiles(org/apache/hadoop/hive/ql/metadata/Hive.java: line 1821) -> copyFiles(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3937) -> needToCopy(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3389) -> throw HiveException(org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856) {code} this HiveException was thrown and caught by method loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java:line 472) whose error message is not overwritten. > hive MoveTask cannot show the correct exception message > --- > > Key: HIVE-27688 > URL: https://issues.apache.org/jira/browse/HIVE-27688 > Project: Hive > Issue Type: Bug >Reporter: liang yu >Assignee: liang yu >Priority: Major > Attachments:
[jira] [Updated] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liang yu updated HIVE-27688: Description: I am using hive.version=3.1.3 and hadoop.version=3.3.4, with hive.load.data.owner set to hive, and I used user ubd_by to execute SQL. When I try to insert *overwrite* to an {*}existing table partition{*}, I get the exception: Unable to move source /path/to/source to destination /path/to/dest, which is a very generic error message and gives me no helpful information.
{code:java}
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source hdfs://xl/user/ubd master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging to destination hdfs://xl/user/ubd master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging {code}
But when I try to insert *into* an {*}existing table partition{*}, I get the exception: Unable to move source /path/to/source to destination /path/to/dest as the file is not owned by hive and load data is also not ran as hive, which is a very helpful error message telling me that I should change hive.load.data.owner to hive.
{code:java}
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.exec.MoveTask: Load Data failed for hdfs://xl/user/ubd master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.hive-staging_hive_2023-09-13_17-34-31_302_5892190500368248766-1/-ext-1 as the file is not owned by hive and load data is also not ran as hive{code}
> hive MoveTask cannot show the correct exception message
> ---
>
> Key: HIVE-27688
> URL: https://issues.apache.org/jira/browse/HIVE-27688
> Project: Hive
> Issue Type: Bug
> Reporter: liang yu
> Assignee: liang yu
> Priority: Major
> Attachments: image-2023-09-13-17-39-27-864.png, image-2023-09-13-17-40-02-981.png
>
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liang yu updated HIVE-27688: Description: Setting hive.load.data.owner to hive. When I try to insert *overwrite* to an {*}existing table partition{*}, I get the exception: Unable to move source /path/to/source to destination /path/to/dest, which is a very generic error message and gives me no helpful information.
{code:java}
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. Unable to move source hdfs://xl/user/ubd master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging to destination hdfs://xl/user/ubd master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.staging {code}
But when I try to insert *into* an {*}existing table partition{*}, I get the exception: Unable to move source /path/to/source to destination /path/to/dest as the file is not owned by hive and load data is also not ran as hive, which is a very helpful error message telling me that I should change hive.load.data.owner to hive.
{code:java}
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.exec.MoveTask: Load Data failed for hdfs://xl/user/ubd master/ubd_b_dwa.db/dwa_m_user/month_id=xxx/prov_id=xx/.hive-staging_hive_2023-09-13_17-34-31_302_5892190500368248766-1/-ext-1 as the file is not owned by hive and load data is also not ran as hive{code}
> hive MoveTask cannot show the correct exception message
> ---
>
> Key: HIVE-27688
> URL: https://issues.apache.org/jira/browse/HIVE-27688
> Project: Hive
> Issue Type: Bug
> Reporter: liang yu
> Assignee: liang yu
> Priority: Major
> Attachments: image-2023-09-13-17-39-27-864.png, image-2023-09-13-17-40-02-981.png
>
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764575#comment-17764575 ] liang yu edited comment on HIVE-27688 at 9/13/23 10:06 AM: --- I traced the code of MoveTask and found that when we execute the SQL "insert overwrite table ... partition (XX) select ...", it throws a HiveException whose exception message is overwritten by the method getHiveException(e, msg). Here is the chain of how it gets overwritten:
{code:java}
execute (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 273)
-> handleStaticParts (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 392)
-> loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472)
-> replaceFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 1817)
-> moveFile (org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136)
-> needToCopy (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3681)
-> throw HiveException (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856)
{code}
This HiveException's message is overwritten by the method moveFile (org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136): it catches the exception and replaces the message with "Unable to move source /path/to/source to destination /path/to/dest", which is a very generic error message. But when I execute the SQL "insert into table ... partition (XX) select ...", it throws a HiveException which is not overwritten. Here is the chain of how it throws the correct exception:
{code:java}
execute (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 273)
-> handleStaticParts (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 392)
-> loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472)
-> copyFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 1821)
-> copyFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3937)
-> needToCopy (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3389)
-> throw HiveException (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856)
{code}
This HiveException is thrown and caught by the method loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472), whose error message is not overwritten.
> hive MoveTask cannot show the correct exception message
> ---
>
> Key: HIVE-27688
> URL: https://issues.apache.org/jira/browse/HIVE-27688
> Project: Hive
> Issue Type: Bug
> Reporter: liang yu
> Assignee: liang yu
> Priority: Major
> Attachments: image-2023-09-13-17-39-27-864.png, image-2023-09-13-17-40-02-981.png
>
-- This message was sent by Atlassian Jira (v8.20.10#820010)
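The message-loss mechanism described in this comment can be illustrated with a standalone sketch. All class and method names below are hypothetical stand-ins, not Hive's actual code: one path rethrows with a fresh generic message (losing the root cause's text), the other chains the cause and keeps its message visible.

```java
// Minimal sketch of the exception-message-overwriting problem.
// Hypothetical names; not Hive's actual MoveTask/Hive.java code.
public class MessageOverwriteDemo {

    // Mimics a low-level check that throws with a specific, helpful message.
    static void needToCopy() throws Exception {
        throw new Exception("file is not owned by hive and load data is also not ran as hive");
    }

    // Mimics the "insert overwrite" path: catches the specific exception and
    // rethrows a brand-new one, discarding the helpful message entirely.
    static void moveFileOverwriting() throws Exception {
        try {
            needToCopy();
        } catch (Exception e) {
            throw new Exception("Unable to move source /path/to/source to destination /path/to/dest");
        }
    }

    // Mimics the "insert into" path: wraps the cause and includes its message,
    // so the root-cause text survives in the surfaced error.
    static void moveFilePreserving() throws Exception {
        try {
            needToCopy();
        } catch (Exception e) {
            throw new Exception("Unable to move source /path/to/source to destination /path/to/dest: "
                    + e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        try {
            moveFileOverwriting();
        } catch (Exception e) {
            System.out.println(e.getMessage()); // generic message only; root cause lost
        }
        try {
            moveFilePreserving();
        } catch (Exception e) {
            System.out.println(e.getMessage()); // generic prefix plus the root-cause text
        }
    }
}
```

Passing the caught exception as the cause (second constructor argument) also preserves the full stack trace, which is what makes the "insert into" variant debuggable.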
[jira] [Commented] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764575#comment-17764575 ] liang yu commented on HIVE-27688: - I traced the code of MoveTask and found that when we execute the SQL "insert overwrite table ... partition (XX) select ...", it throws a HiveException whose exception message is overwritten by the method getHiveException(e, msg). Here is the chain of how it gets overwritten:
execute (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 273)
-> handleStaticParts (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 392)
-> loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472)
-> replaceFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 1817)
-> moveFile (org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136)
-> needToCopy (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3681)
-> throw HiveException (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856)
This HiveException's message is overwritten by the method moveFile (org/apache/hadoop/hive/ql/metadata/Hive.java: line 4136): it catches the exception and replaces the message with "Unable to move source /path/to/source to destination /path/to/dest", which is a very generic error message. But when I execute the SQL "insert into table ... partition (XX) select ...", it throws a HiveException which is not overwritten. Here is the chain of how it throws the correct exception:
execute (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 273)
-> handleStaticParts (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 392)
-> loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472)
-> copyFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 1821)
-> copyFiles (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3937)
-> needToCopy (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3389)
-> throw HiveException (org/apache/hadoop/hive/ql/metadata/Hive.java: line 3856)
This HiveException is thrown and caught by the method loadPartition (org/apache/hadoop/hive/ql/exec/MoveTask.java: line 472), whose error message is not overwritten.
> hive MoveTask cannot show the correct exception message
> ---
>
> Key: HIVE-27688
> URL: https://issues.apache.org/jira/browse/HIVE-27688
> Project: Hive
> Issue Type: Bug
> Reporter: liang yu
> Assignee: liang yu
> Priority: Major
> Attachments: image-2023-09-13-17-39-27-864.png, image-2023-09-13-17-40-02-981.png
>
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liang yu updated HIVE-27688: Attachment: image-2023-09-13-17-40-02-981.png image-2023-09-13-17-39-27-864.png image-2023-09-13-17-38-58-508.png Description: Setting hive.load.data.owner to hive. When I try to insert overwrite to an existing table partition, I get the exception: Unable to move source /path/to/source to destination /path/to/dest, which is a very generic error message and gives me no helpful information. !image-2023-09-13-17-40-02-981.png! But when I try to insert into an existing table partition, I get the exception: Unable to move source /path/to/source to destination /path/to/dest as the file is not owned by hive and load data is also not ran as hive, which is a very helpful error message telling me that I should change hive.load.data.owner to hive. !image-2023-09-13-17-39-27-864.png!
> hive MoveTask cannot show the correct exception message
> ---
>
> Key: HIVE-27688
> URL: https://issues.apache.org/jira/browse/HIVE-27688
> Project: Hive
> Issue Type: Bug
> Reporter: liang yu
> Assignee: liang yu
> Priority: Major
> Attachments: image-2023-09-13-17-39-27-864.png, image-2023-09-13-17-40-02-981.png
>
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27688) hive MoveTask cannot show the correct exception message
[ https://issues.apache.org/jira/browse/HIVE-27688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liang yu updated HIVE-27688: Attachment: (was: image-2023-09-13-17-38-58-508.png)
> hive MoveTask cannot show the correct exception message
> ---
>
> Key: HIVE-27688
> URL: https://issues.apache.org/jira/browse/HIVE-27688
> Project: Hive
> Issue Type: Bug
> Reporter: liang yu
> Assignee: liang yu
> Priority: Major
> Attachments: image-2023-09-13-17-39-27-864.png, image-2023-09-13-17-40-02-981.png
>
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27688) hive MoveTask cannot show the correct exception message
liang yu created HIVE-27688: --- Summary: hive MoveTask cannot show the correct exception message Key: HIVE-27688 URL: https://issues.apache.org/jira/browse/HIVE-27688 Project: Hive Issue Type: Bug Reporter: liang yu Assignee: liang yu Setting hive.load.data.owner to hive. When I try to insert overwrite to an existing table partition, I get the exception: Unable to move source /path/to/source to destination /path/to/dest, which is a very generic error message and gives me no helpful information. But when I try to insert into an existing table partition, I get the exception: Unable to move source /path/to/source to destination /path/to/dest as the file is not owned by hive and load data is also not ran as hive, which is a very helpful error message telling me that I should change hive.load.data.owner to hive. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-17350) metrics errors when retrying HS2 startup
[ https://issues.apache.org/jira/browse/HIVE-17350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HIVE-17350: Fix Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available)
> metrics errors when retrying HS2 startup
>
> Key: HIVE-17350
> URL: https://issues.apache.org/jira/browse/HIVE-17350
> Project: Hive
> Issue Type: Bug
> Reporter: Sergey Shelukhin
> Assignee: Mayank Kunwar
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.0.0
>
> Looks like there are some sort of retries that happen when HS2 init fails.
> When HS2 startup fails for an unrelated reason and is retried, the metrics
> source initialization fails on subsequent attempts.
> {noformat}
> 2017-08-15T23:31:47,650 WARN [main]: impl.MetricsSystemImpl (MetricsSystemImpl.java:init(152)) - hiveserver2 metrics system already initialized!
> 2017-08-15T23:31:47,650 ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:init(438)) - error in Metrics init: java.lang.reflect.InvocationTargetException null
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:435)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:79)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:92)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6892)
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:140)
> at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1653)
> at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
> at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
> at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
> at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3612)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3664)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3644)
> at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582)
> at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:545)
> at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:128)
> at org.apache.hive.service.cli.CLIService.init(CLIService.java:113)
> at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
> at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:139)
> at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:595)
> at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:97)
> at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:843)
> at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:712)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[jira] [Commented] (HIVE-17350) metrics errors when retrying HS2 startup
[ https://issues.apache.org/jira/browse/HIVE-17350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17764541#comment-17764541 ] Ayush Saxena commented on HIVE-17350: - Committed to master. Thanx [~mkunwar] for the contribution & [~abstractdog] for the review!!!
> metrics errors when retrying HS2 startup
>
> Key: HIVE-17350
> URL: https://issues.apache.org/jira/browse/HIVE-17350
> Project: Hive
> Issue Type: Bug
> Reporter: Sergey Shelukhin
> Assignee: Mayank Kunwar
> Priority: Major
> Labels: pull-request-available
>
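The failure pattern in HIVE-17350 (a metrics init step that errors out when a startup retry runs it a second time, as in "hiveserver2 metrics system already initialized!") is commonly avoided with an idempotent init guard. Below is a minimal standalone sketch of that idea, using a hypothetical class rather than Hive's actual MetricsFactory or Hadoop's MetricsSystemImpl:

```java
// Minimal sketch of an idempotent init guard, so a retried startup does not
// fail because a previous attempt already performed the initialization.
// Hypothetical class; not Hive's actual metrics code.
import java.util.concurrent.atomic.AtomicBoolean;

public class MetricsInit {
    private static final AtomicBoolean initialized = new AtomicBoolean(false);

    // Returns true only for the first successful call; subsequent calls are
    // harmless no-ops instead of throwing, so startup retries can proceed.
    public static boolean init() {
        if (!initialized.compareAndSet(false, true)) {
            return false; // already initialized by an earlier attempt; skip
        }
        // ... real metrics-source registration would happen here ...
        return true;
    }

    public static void main(String[] args) {
        System.out.println(init()); // first startup attempt performs the init
        System.out.println(init()); // retried attempt is a no-op, not an error
    }
}
```

The compare-and-set makes the guard thread-safe as well: even if two startup paths race, exactly one performs the initialization and the other quietly skips it.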