If fine-grained recovery (a feature added in 1.9.0) is enabled, the graph
> would not be restarted when task failures happen, and the "fullRestart"
> value will not increment in such cases.
>
> I'd appreciate it if you can help with these questions so we can make better
> decisions. Also, what are the metrics you add in your
> customized restart strategy?
>
> Thanks,
> Zhu Zhu
>
> Steven Wu wrote on Fri, Sep 20, 2019 at 7:11 AM:
>
>> We do use config like "restart-strategy:
>> org.foobar.MyRestartStrategyFactoryFactory". Mainly to add additional
>> metrics beyond the Flink-provided ones.
We do use config like "restart-strategy:
org.foobar.MyRestartStrategyFactoryFactory". Mainly to add additional
metrics beyond the Flink-provided ones.
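As a sketch of the kind of configuration mentioned above (the class name is the placeholder from the message itself, and this assumes the custom factory class is available on the Flink classpath):

```yaml
# flink-conf.yaml (illustrative): point restart-strategy at a custom factory
# class instead of a built-in strategy name; org.foobar.* is a placeholder.
restart-strategy: org.foobar.MyRestartStrategyFactoryFactory
```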
On Thu, Sep 19, 2019 at 4:50 AM Zhu Zhu wrote:
> Thanks everyone for the input.
>
> The RestartStrategy customization is not recognized as a public interface.
>> For the decision we make this time, I'd
>> suggest to make it final and document it in our release notes explicitly.
>> Checking the 1.5.0 release notes [1] [2], it seems we didn't mention
>> the change of the default restart delay, and we'd better learn from that
>> this time. Thanks.
+1 on what Zhu Zhu said.
We also override the default to 10 s.
On Fri, Aug 30, 2019 at 8:58 PM Zhu Zhu wrote:
> In our production, we usually override the restart delay to be 10 s.
> We once encountered cases where external services were overwhelmed by
> reconnections from frequent restarts
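For reference, overriding the default restart delay as described above is typically done in flink-conf.yaml. This is an illustrative sketch using the built-in fixed-delay strategy; the values are examples from this thread, not recommendations:

```yaml
# flink-conf.yaml (illustrative): fixed-delay restart strategy with a 10 s
# delay between restart attempts, to avoid hammering external services.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```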
The long and sometimes unstable build is definitely a pain point.
I suspect the build failure here in flink-connector-kafka is not related to
my change, but there is no easy way to re-run the build in the Travis UI. A
Google search showed a trick where closing and reopening the PR will trigger
a rebuild, but that could add
This is an awesome feature.
> The name "Savepoint Connector" might indeed not be that good, as it
doesn't point out the fact that with the current design, all kinds of
snapshots (savepoints / full or incremental checkpoints) can be read.
@Gordon, can you add the above clarification to the FLIP?
This proposal mentioned that SplitEnumerator might run on the JobManager or
in a single task on a TaskManager.
If the enumerator is a single task on a TaskManager, then the job DAG can
never be embarrassingly parallel anymore. That will nullify the leverage of
fine-grained recovery for embarrassingly parallel jobs.
> And each split has its own (internal) thread for reading from Kafka and
putting messages in an internal queue to pull from. This is similar to how
the current Kafka source is implemented, which has a separate fetcher
thread.
Aljoscha, in the Kafka case, one split may contain multiple Kafka partitions
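The per-split fetcher-thread pattern described above can be sketched in plain Java. This is illustrative only, not Flink's actual source API: the name `readSplit` is made up here, and a plain string stands in for a Kafka record.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: each split gets its own fetcher thread that reads
// records and hands them to the main task thread via a bounded queue,
// mirroring the per-split fetcher-thread pattern described above.
public class SplitFetcherSketch {

    /** Starts one fetcher thread for a split and drains its records. */
    static List<String> readSplit(int recordCount) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
        Thread fetcher = new Thread(() -> {
            try {
                for (int i = 0; i < recordCount; i++) {
                    // A real implementation would call consumer.poll() here.
                    queue.put("record-" + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "split-fetcher");
        fetcher.start();

        List<String> records = new ArrayList<>();
        for (int i = 0; i < recordCount; i++) {
            records.add(queue.take()); // main task thread pulls from the queue
        }
        fetcher.join();
        return records;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(readSplit(3)); // prints [record-0, record-1, record-2]
    }
}
```

The bounded queue provides backpressure: if the task thread falls behind, `queue.put` blocks the fetcher instead of buffering unboundedly.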
Please prioritize a proper long-term fix for this issue. It is a big
scalability issue for high-parallelism jobs (e.g. parallelism over 1,000).
FLINK-10122 KafkaConsumer should use partitionable state over union state
if partition discovery is not active
On Fri, Sep 28, 2018 at 7:20 AM Till Rohrmann wrote:
> >> available to you as a
> >> contributor anytime.
> >>
> >> On Thu, Apr 26, 2018 at 4:40 PM, TechnoMage <mla...@technomage.com>
> wrote:
> >>
> >>> Contributor permission is only granted after significant
> contributions.
Can someone grant me the contributor permission? We are thinking about
contributing to FLINK-9061. My JIRA id is "stevenz3wu".
Thanks,
Steven
Can we add these two? They can make fine-grained recovery more consistent.
https://issues.apache.org/jira/browse/FLINK-8042
https://issues.apache.org/jira/browse/FLINK-8043
On Tue, Jan 16, 2018 at 8:35 AM, Timo Walther wrote:
> @Jincheng: Yes, I think we should include the