Besides that, Flink SQL also fails to read our existing Hive data warehouse. With the Hive catalog configured, the table metadata all shows up, but any SELECT against it fails.
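For context on the "Recoverable writers on Hadoop are only supported for HDFS" exception quoted below: the constructor of Flink's HadoopRecoverableWriter (line 61 in the trace) rejects any Hadoop-compatible FileSystem whose URI scheme is not "hdfs", which is why an in-house DFS exposed through the Hadoop API trips it. A minimal sketch of that check, assuming the class name SchemeCheck and the scheme "myfs" are illustrative placeholders, not Flink code:

```java
// Sketch of the scheme guard in HadoopRecoverableWriter's constructor:
// any Hadoop FileSystem whose scheme is not "hdfs" (e.g. an in-house DFS)
// is rejected before any write is attempted.
public class SchemeCheck {
    static void checkRecoverableWriterSupport(String scheme) {
        if (!"hdfs".equalsIgnoreCase(scheme)) {
            throw new UnsupportedOperationException(
                    "Recoverable writers on Hadoop are only supported for HDFS");
        }
    }

    public static void main(String[] args) {
        checkRecoverableWriterSupport("hdfs"); // passes silently
        try {
            checkRecoverableWriterSupport("myfs"); // hypothetical in-house scheme
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```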

赵一旦 <hinobl...@gmail.com> wrote on Thu, Jan 21, 2021 at 5:18 PM:

> The detailed error message is as follows:
>
> java.lang.UnsupportedOperationException: Recoverable writers on Hadoop are only supported for HDFS
>     at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:61)
>     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:210)
>     at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
>     at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBucketWriter(StreamingFileSink.java:260)
>     at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:270)
>     at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:412)
>     at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:185)
>     at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:167)
>     at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>     at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:107)
>     at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:264)
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:400)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:507)
>     at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:501)
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:531)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547)
>     at java.lang.Thread.run(Thread.java:748)
>
>
> 赵一旦 <hinobl...@gmail.com> wrote on Thu, Jan 21, 2021 at 5:17 PM:
>
>> Recoverable writers on Hadoop are only supported for HDFS
>>
>> As shown above: we access storage through the Hadoop protocol, but the underlying system is not HDFS; it is a distributed file system developed in-house at our company.
>>
>> Writing with Spark and reading with Spark SQL both work without issue, but so far we have not managed to get either writing or reading to work with Flink.
>>
>>
>>
