[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-13998:
-----------------------------------
    Labels: auto-unassigned  (was: stale-assigned)

> Fix ORC test failure with Hive 2.0.x
> ------------------------------------
>
>                 Key: FLINK-13998
>                 URL: https://issues.apache.org/jira/browse/FLINK-13998
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Hive
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>            Priority: Major
>              Labels: auto-unassigned
>             Fix For: 1.14.0
>
>
> Our tests use the local file system, and ORC in Hive 2.0.x seems to have an issue with that:
> {code}
> 06:54:43.156 [ORC_GET_SPLITS #0] ERROR org.apache.hadoop.hive.ql.io.AcidUtils - Failed to get files with ID; using regular API
> java.lang.UnsupportedOperationException: Only supported for DFS; got class org.apache.hadoop.fs.LocalFileSystem
>         at org.apache.hadoop.hive.shims.Hadoop23Shims.ensureDfs(Hadoop23Shims.java:813) ~[hive-exec-2.0.0.jar:2.0.0]
>         at org.apache.hadoop.hive.shims.Hadoop23Shims.listLocatedHdfsStatus(Hadoop23Shims.java:784) ~[hive-exec-2.0.0.jar:2.0.0]
>         at org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:477) [hive-exec-2.0.0.jar:2.0.0]
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:890) [hive-exec-2.0.0.jar:2.0.0]
>         at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:875) [hive-exec-2.0.0.jar:2.0.0]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
>         at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
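For context, the exception comes from a DFS-only guard in Hive's Hadoop shims. Based only on the message in the stack trace above, the guard behaves roughly like the sketch below; the ensureDfs method and class name here are illustrative stand-ins, not the actual Hadoop23Shims source.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class DfsOnlyGuardSketch {

    // Illustrative stand-in for the DFS-only check behind the stack trace:
    // the file-ID listing path in Hive 2.0.x rejects anything but HDFS.
    static void ensureDfs(FileSystem fs) {
        if (!(fs instanceof DistributedFileSystem)) {
            throw new UnsupportedOperationException(
                    "Only supported for DFS; got " + fs.getClass());
        }
    }

    public static void main(String[] args) throws Exception {
        // A default Configuration has fs.defaultFS = file:///, so this returns a
        // LocalFileSystem -- the same situation the test creates when the Hive
        // warehouse directory lives on the local file system.
        FileSystem fs = FileSystem.get(new Configuration());
        ensureDfs(fs); // throws: Only supported for DFS; got class ...LocalFileSystem
    }
}
{code}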
[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-13998:
-----------------------------------
    Labels: stale-assigned  (was: )

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jark Wu updated FLINK-13998:
----------------------------
    Fix Version/s: 1.14.0  (was: 1.13.0)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leonard Xu updated FLINK-13998:
-------------------------------
    Fix Version/s: 1.13.0  (was: 1.12.0)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Danny Chen updated FLINK-13998:
-------------------------------
    Fix Version/s: 1.12.0  (was: 1.11.0)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jingsong Lee updated FLINK-13998:
---------------------------------
    Fix Version/s: 1.11.0  (was: 1.10.0)

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuefu Zhang updated FLINK-13998:
--------------------------------
    Description: 
Our tests use the local file system, and ORC in Hive 2.0.x seems to have an issue with that:
{code}
06:54:43.156 [ORC_GET_SPLITS #0] ERROR org.apache.hadoop.hive.ql.io.AcidUtils - Failed to get files with ID; using regular API
java.lang.UnsupportedOperationException: Only supported for DFS; got class org.apache.hadoop.fs.LocalFileSystem
        at org.apache.hadoop.hive.shims.Hadoop23Shims.ensureDfs(Hadoop23Shims.java:813) ~[hive-exec-2.0.0.jar:2.0.0]
        at org.apache.hadoop.hive.shims.Hadoop23Shims.listLocatedHdfsStatus(Hadoop23Shims.java:784) ~[hive-exec-2.0.0.jar:2.0.0]
        at org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:477) [hive-exec-2.0.0.jar:2.0.0]
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:890) [hive-exec-2.0.0.jar:2.0.0]
        at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:875) [hive-exec-2.0.0.jar:2.0.0]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
{code}

  was: Including 2.0.0 and 2.0.1.

--
This message was sent by Atlassian Jira
(v8.3.2#803003)
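One way a test can sidestep this limitation is to back the test data with an in-process HDFS from hadoop-minicluster, so the paths Hive/ORC sees resolve to a DistributedFileSystem rather than LocalFileSystem. This is only a sketch of a possible workaround, not necessarily how this ticket was resolved, and the class name and warehouse path below are hypothetical test scaffolding.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsBackedTestSketch {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Start an in-process HDFS so paths handed to Hive/ORC resolve to a
        // DistributedFileSystem instead of LocalFileSystem.
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
        try {
            FileSystem fs = cluster.getFileSystem(); // a DistributedFileSystem
            // A test would point the Hive warehouse / table location at a path on
            // this file system (e.g. fs.getUri() + "/warehouse") before writing ORC data.
            System.out.println("default FS for the test: " + fs.getUri());
        } finally {
            cluster.shutdown();
        }
    }
}
{code}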
[jira] [Updated] (FLINK-13998) Fix ORC test failure with Hive 2.0.x
[ https://issues.apache.org/jira/browse/FLINK-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuefu Zhang updated FLINK-13998:
--------------------------------
    Labels:   (was: pull-request-available)

> Fix ORC test failure with Hive 2.0.x
> ------------------------------------
>
>                 Key: FLINK-13998
>                 URL: https://issues.apache.org/jira/browse/FLINK-13998
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Hive
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>            Priority: Major
>             Fix For: 1.10.0
>
>
> Including 2.0.0 and 2.0.1.

--
This message was sent by Atlassian Jira
(v8.3.2#803003)