Hmm, thanks for the suggestion; I'll try it the next time the problem shows up. Things are fine after the restart for now. My current impression is that it only appears on clusters that have been running for a long time.
Lijie Wang wrote on Thu, Sep 29, 2022 at 10:17:
>
> Hi,
>
> You can try increasing the value of taskmanager.network.request-backoff.max. The default is 10000 ms, i.e. 10 s.
> Upstream and downstream tasks may be deployed concurrently, so it is possible for the downstream to request a partition before the upstream has finished deploying. Increasing
> taskmanager.network.request-backoff.max lengthens the downstream's wait time and number of retries, reducing
>
Hi,
You can try increasing the value of taskmanager.network.request-backoff.max. The default is 10000 ms, i.e. 10 s.
Upstream and downstream tasks may be deployed concurrently, so it is possible for the downstream to request a partition before the upstream has finished deploying. Increasing taskmanager.network.request-backoff.max lengthens the downstream's wait time and number of retries, reducing the chance of a PartitionNotFoundException.
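The tuning above would go in flink-conf.yaml; a minimal sketch, where the raised max value is an example rather than a recommendation:

```yaml
# Back off and retry partition requests for longer before giving up.
# taskmanager.network.request-backoff.initial defaults to 100 (ms);
# taskmanager.network.request-backoff.max defaults to 10000 (ms, i.e. 10 s).
taskmanager.network.request-backoff.initial: 100
taskmanager.network.request-backoff.max: 30000
```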
Best,
Lijie
yidan zhao wrote on Wed, Sep 28, 2022 at 17:35:
>
Hi, I'd like to ask: if a record coming from Kafka is malformed, the job fails, and the bad record is thrown into the logs.
To keep the job running, I need to add 'value.json.ignore-parse-errors' = 'true' and
'value.json.fail-on-missing-field' = 'false' inside the *** WITH clause.
After that, how am I supposed to detect malformed records when they occur?
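The two options mentioned above are set in the table DDL; a minimal sketch, assuming a Kafka table whose name, columns, topic, and broker address are all hypothetical:

```sql
-- Sketch only: table name, columns, topic, and servers are made up.
CREATE TABLE my_kafka_source (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'my-topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'value.format' = 'json',
  -- skip records that cannot be parsed instead of failing the job
  'value.json.ignore-parse-errors' = 'true',
  -- set missing fields to NULL instead of failing
  'value.json.fail-on-missing-field' = 'false'
);
```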
Hello,
I have a flink streaming job that consumes data from RMQ source, process it
and output it to RMQ sink.
I want to delete the RMQ source queue when cancelling the job, but keep it
if the job failed, so that its data can be reprocessed when the job is restarted.
One solution is to override the
Thank you, I have tried both approaches. Overriding the open method did not work,
but by implementing CheckpointedFunction and overriding initializeState I was
able to access and operate on the broadcast state:
@Override
public void initializeState(FunctionInitializationContext context) throws Exception {
    // the broadcast state can be looked up from the context here
}
Hi Marco,
The email is received by the list, but no answers have yet been provided
unfortunately.
Best regards,
Martijn
On Wed, Sep 28, 2022 at 4:11 PM Marco Villalobos wrote:
> Did this list receive my email?
>
> I’m only asking because my last few questions have gone unanswered and
> maybe
Did this list receive my email?
I’m only asking because my last few questions have gone unanswered and maybe
the list server is blocking me.
Anybody, please let me know.
> On Sep 26, 2022, at 8:41 PM, Marco Villalobos wrote:
>
> I indeed see the value of Flink Stateful Functions.
>
>
Thanks Xingbo for releasing it.
Best,
Jingsong
On Wed, Sep 28, 2022 at 10:52 AM Xingbo Huang wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.14.6, which is the fifth bugfix release for the Apache Flink 1.14
> series.
>
> Apache Flink® is an
By Flink's design, is it possible for the downstream to start requesting a partition before the upstream has been deployed successfully? Also, would there normally be logs when the upstream fails to deploy?
I restarted the cluster and things are OK for now; I'll wait a while and see whether it comes back.
Shammon FY wrote on Wed, Sep 28, 2022 at 15:45:
>
> Hi
>
> A task reports PartitionNotFoundException because it sent a partition
> request to the upstream TaskManager, whose netty server, on receiving the partition request, found that the requested upstream task had not been deployed successfully.
>
Hi John
Yes, the whole TaskManager exited because the task did not react to the
cancelling signal in time:
```
2022-08-30 09:14:22,138 ERROR
org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Task
did not exit gracefully within 180 + seconds.
```
Hi
A task reports PartitionNotFoundException because it sent a partition request to the upstream TaskManager, whose netty server, on receiving the partition request, found that the requested upstream task had not been deployed successfully.
So, judging from this exception, the netty connection itself is fine. You probably need to start from the task that logs the PartitionNotFoundException and check why its upstream task failed to deploy.
On Tue, Sep 27, 2022 at 10:20 PM yidan zhao wrote:
>
Hi guys,
I was trying to use Flink SQL to consume data from a Kafka source whose format
is debezium-avro-confluent, and I encountered an AvroTypeException saying
"Found something, expecting union", where "something" is not a type but a
field that I defined in the schema registry.
So
Thanks for the suggestion, but I'm a beginner and haven't found a way to parse the Job Graph.
> On Sep 28, 2022, at 14:46, zilong xiao wrote:
>
> You could try parsing the job's Job Graph.
>
> BIGO wrote on Wed, Sep 28, 2022 at 14:25:
>
>> Hi all.
>> I need to set operator uids for a job that previously had none, without discarding the savepoint data.
>> My problem is that I don't know how to work out the mapping between the operators and the uidHash values Flink generated for them by default.
>> Any advice would be appreciated, thanks.
You could try parsing the job's Job Graph.
BIGO wrote on Wed, Sep 28, 2022 at 14:25:
> Hi all.
> I need to set operator uids for a job that previously had none, without discarding the savepoint data.
> My problem is that I don't know how to work out the mapping between the operators and the uidHash values Flink generated for them by default.
> Any advice would be appreciated, thanks.
Hi all.
I need to set operator uids for a job that previously had none, without discarding the savepoint data.
My problem is that I don't know how to work out the mapping between the operators and the uidHash values Flink generated for them by default.
Any advice would be appreciated, thanks.
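Once the operator-to-hash mapping is known, the DataStream API offers uid(...) for the new explicit IDs and setUidHash(...) to pin an operator to the hash recorded in the old savepoint. A minimal sketch, where the stream, operator names, and hash value are all hypothetical:

```java
// Sketch: 'events' is an existing DataStream; the hash string must be the
// uidHash that Flink auto-generated for this operator in the old job graph.
events
    .map(new MyEnricher())
    .uid("enricher")                                  // explicit uid going forward
    .setUidHash("2e588ce1aefdf77f62a1dd184dce0b57")   // hypothetical recovered hash
    .print();
```

setUidHash lets the restored state from the savepoint be matched to the operator once, after which the explicit uid takes over for future savepoints.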