Github user zhenzhongxu commented on the issue:
https://github.com/apache/flink/pull/4149
Thanks @aljoscha for the suggestion. `KeyedDeserializationSchema` seems like
a better approach. I'll close this PR.
Github user aljoscha commented on the issue:
https://github.com/apache/flink/pull/4149
@zhenzhongxu Have you looked into `KeyedDeserializationSchema`? In there
you have access to the partition and the offset at which the record originated.
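As a rough illustration of that suggestion: the sketch below implements `KeyedDeserializationSchema` so the partition and offset travel with each record. The `RecordWithMetadata` wrapper type and the schema class name are made up for the example, and the import paths assume the Flink 1.x connector API this PR targets.

```java
import java.io.IOException;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema;

// Hypothetical wrapper that carries the raw value together with where it came from.
class RecordWithMetadata {
    public byte[] value;
    public String topic;
    public int partition;
    public long offset;

    public RecordWithMetadata() {}

    public RecordWithMetadata(byte[] value, String topic, int partition, long offset) {
        this.value = value;
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
    }
}

// Sketch: deserialize() receives the topic, partition, and offset of every
// Kafka record, so they can be attached to the emitted element.
public class MetadataAttachingSchema implements KeyedDeserializationSchema<RecordWithMetadata> {

    @Override
    public RecordWithMetadata deserialize(
            byte[] messageKey, byte[] message,
            String topic, int partition, long offset) throws IOException {
        return new RecordWithMetadata(message, topic, partition, offset);
    }

    @Override
    public boolean isEndOfStream(RecordWithMetadata nextElement) {
        return false;
    }

    @Override
    public TypeInformation<RecordWithMetadata> getProducedType() {
        return TypeExtractor.getForClass(RecordWithMetadata.class);
    }
}
```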
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4149
@zhenzhongxu at the very least, I would expose these new values as
protected methods, and let the fields be private. Then, have particular tests
that verify the values are correct.
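A minimal sketch of that shape, with made-up class and field names (not taken from the PR): the new values stay private and are readable only through protected accessors, which subclasses and tests can use.

```java
// Hypothetical: keep the new values private and expose them via protected
// getters, so subclasses and tests can read them but cannot be bypassed by
// external code writing to the fields directly.
public class ExampleFetcher {

    private volatile int lastConsumedPartition = -1;
    private volatile long lastConsumedOffset = -1L;

    protected int getLastConsumedPartition() {
        return lastConsumedPartition;
    }

    protected long getLastConsumedOffset() {
        return lastConsumedOffset;
    }

    // called internally whenever a record is emitted (hypothetical hook)
    void recordEmitted(int partition, long offset) {
        this.lastConsumedPartition = partition;
        this.lastConsumedOffset = offset;
    }
}
```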
Github user zhenzhongxu commented on the issue:
https://github.com/apache/flink/pull/4149
Sounds fair. @aljoscha @tzulitai Any recommendations on which tests in
particular I should add, and where I should put them? I'll also improve the
documentation.
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4149
+1, agree with @aljoscha here.
The main problem with this change is that there are no usages of it in other
parts of the codebase, so it can very easily be removed "accidentally" in the
future.
Github user aljoscha commented on the issue:
https://github.com/apache/flink/pull/4149
Just a quick comment: without a test or a very explicit comment about why
these fields are there, they might get "fixed away" in the future by someone who
notices that there are unused fields.
Github user zhenzhongxu commented on the issue:
https://github.com/apache/flink/pull/4149
Hi @tzulitai, in this particular case, we actually disabled Flink
checkpointing (because we do not want to rely on the fixed-interval barriers to
trigger sink flush/offset commit). As a workaround,
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4149
@zhenzhongxu if I understood you correctly, instead of this solution, would
it then make sense for your case to make the Kafka offset committing happen
only when the checkpoint is completed, not
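For reference, a minimal sketch of that alternative with checkpointing enabled; the topic, broker address, and checkpoint interval are placeholders, and `setCommitOffsetsOnCheckpoints` is the consumer switch that ties offset commits to checkpoint completion.

```java
import java.util.Properties;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class CommitOnCheckpointExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint interval in ms, illustrative value

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "example-group");           // placeholder

        FlinkKafkaConsumer010<String> consumer =
                new FlinkKafkaConsumer010<>("example-topic", new SimpleStringSchema(), props);
        // Offsets are committed back to Kafka only when a checkpoint completes,
        // not on Kafka's own fixed auto-commit interval.
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("commit-offsets-on-checkpoint example");
    }
}
```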
Github user zhenzhongxu commented on the issue:
https://github.com/apache/flink/pull/4149
Hi @tzulitai. Yes, we do have a use case where we need to disable Flink
checkpointing because the time interval checkpointing model does not work with
our constraints. We had to trigger Kafka
Github user tzulitai commented on the issue:
https://github.com/apache/flink/pull/4149
@zhenzhongxu could you explain a bit why you need to expose these two
values? What are you trying to achieve in your custom fetcher?