[GitHub] flink issue #4149: [FLINK-6923] [Kafka Connector] Expose in-processing/in-fl...

2017-07-25 Thread zhenzhongxu
Github user zhenzhongxu commented on the issue: https://github.com/apache/flink/pull/4149 Thanks @aljoscha for the suggestion. `KeyedDeserializationSchema` seems like a better approach. I'll close this PR.

2017-07-21 Thread aljoscha
Github user aljoscha commented on the issue: https://github.com/apache/flink/pull/4149 @zhenzhongxu Have you looked into `KeyedDeserializationSchema`? In there you have access to the partition and the offset at which the record originated.
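For readers following the thread, here is a minimal sketch of what the `KeyedDeserializationSchema` approach looks like. The interface below is a simplified local stand-in mirroring the shape of Flink 1.x's `KeyedDeserializationSchema` (the real one also extends `Serializable`/`ResultTypeQueryable` and may throw `IOException`); the record and schema classes are illustrative, not Flink code.

```java
import java.nio.charset.StandardCharsets;

// Local stand-in mirroring the shape of Flink 1.x's KeyedDeserializationSchema,
// which hands the deserializer the topic, partition, and offset of each record.
interface KeyedDeserializationSchemaLike<T> {
    T deserialize(byte[] messageKey, byte[] message, String topic, int partition, long offset);
    boolean isEndOfStream(T nextElement);
}

// A record type carrying the Kafka coordinates alongside the payload,
// so downstream operators can see where each record originated.
class RecordWithOffset {
    final String value;
    final String topic;
    final int partition;
    final long offset;

    RecordWithOffset(String value, String topic, int partition, long offset) {
        this.value = value;
        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
    }
}

// Deserializer that exposes partition/offset by embedding them in the
// emitted record, instead of exposing fields on the fetcher itself.
class OffsetTrackingSchema implements KeyedDeserializationSchemaLike<RecordWithOffset> {
    @Override
    public RecordWithOffset deserialize(byte[] messageKey, byte[] message,
                                        String topic, int partition, long offset) {
        return new RecordWithOffset(new String(message, StandardCharsets.UTF_8),
                                    topic, partition, offset);
    }

    @Override
    public boolean isEndOfStream(RecordWithOffset next) {
        return false; // the stream never ends in this sketch
    }
}
```

This is also why the suggestion resolves the PR: the coordinates travel with the data, so no extra fetcher state needs to be exposed.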

2017-07-19 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4149 @zhenzhongxu At the very least, I would expose these new values as protected methods and let the fields be private. Then, have particular tests that verify the values are correct. …
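As a side note for readers, a minimal sketch of the accessor pattern suggested above: keep the new bookkeeping field private and expose it through a protected getter, so subclasses and tests can read and verify the value without mutating it. The class and field names here are made up for illustration, not the actual fetcher fields from the PR.

```java
// Illustrative sketch only: private field, protected read-only accessor.
class FetcherSketch {
    private long inFlightOffset = -1L; // private: not directly touchable from outside

    // Protected accessor: visible to subclasses (custom fetchers) and tests,
    // which can verify the value without being able to corrupt it.
    protected long getInFlightOffset() {
        return inFlightOffset;
    }

    // Called as records are emitted; updates the private bookkeeping.
    void recordEmitted(long offset) {
        inFlightOffset = offset;
    }
}
```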

2017-07-17 Thread zhenzhongxu
Github user zhenzhongxu commented on the issue: https://github.com/apache/flink/pull/4149 Sounds fair. @aljoscha @tzulitai Any recommendations on what particular tests to add, and where I should put them? I'll also improve the documentation.

2017-07-17 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4149 +1, agree with @aljoscha here. The main problem with this change is that there are no usages of it in other parts of the codebase, so it can very easily be removed "accidentally" in the future.

2017-07-13 Thread aljoscha
Github user aljoscha commented on the issue: https://github.com/apache/flink/pull/4149 Just a quick comment: without a test, or a very explicit comment about why these fields are there, they might get "fixed away" in the future by someone who notices that there are unused fields.

2017-06-22 Thread zhenzhongxu
Github user zhenzhongxu commented on the issue: https://github.com/apache/flink/pull/4149 Hi @tzulitai, in this particular case we actually disabled Flink checkpointing (because we do not want to rely on a fixed-interval barrier to trigger sink flush/offset commit). As a workaround, …

2017-06-22 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4149 @zhenzhongxu If I understood you correctly, instead of this solution, would it then make sense for your case to make the Kafka offset committing happen only when the checkpoint is completed, not …
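For context, a sketch of the commit-on-checkpoint pattern being described here. In the real connector this behavior is what `FlinkKafkaConsumerBase#setCommitOffsetsOnCheckpoints(true)` enables (offsets are staged at snapshot time and committed in `notifyCheckpointComplete`); the class below is a self-contained mock of that pattern, not Flink code, and the method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Mock of the commit-on-checkpoint pattern: offsets are staged per
// checkpoint id and committed to Kafka only once that checkpoint completes.
class CheckpointOffsetCommitter {
    // Offsets staged per checkpoint id, awaiting checkpoint completion.
    private final Map<Long, Long> pendingOffsets = new HashMap<>();
    private long committedOffset = -1L; // last offset "committed to Kafka"

    // Analogous to snapshotState(): stage the current offset under the checkpoint id.
    void onCheckpointTriggered(long checkpointId, long currentOffset) {
        pendingOffsets.put(checkpointId, currentOffset);
    }

    // Analogous to notifyCheckpointComplete(): commit the staged offset.
    void onCheckpointComplete(long checkpointId) {
        Long offset = pendingOffsets.remove(checkpointId);
        if (offset != null) {
            committedOffset = offset; // stand-in for the actual Kafka offset commit
        }
    }

    long getCommittedOffset() {
        return committedOffset;
    }
}
```

The point of the pattern is that the committed offset never runs ahead of a completed checkpoint, so a restore from that checkpoint stays consistent with what Kafka believes has been consumed.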

2017-06-21 Thread zhenzhongxu
Github user zhenzhongxu commented on the issue: https://github.com/apache/flink/pull/4149 Hi @tzulitai. Yes, we do have a use case where we need to disable Flink checkpointing because the time-interval checkpointing model does not work with our constraints. We had to trigger Kafka …

2017-06-20 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4149 @zhenzhongxu Could you explain a bit why you need to expose these two values? What are you trying to achieve in your custom fetcher?