[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-24 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 Thanks @tzulitai

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-21 Thread StephanEwen
Github user StephanEwen commented on the issue: https://github.com/apache/flink/pull/4321 Fair enough, +1 then

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-21 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 Dropping this field would make it more error-prone in the future if anyone were to call `reassignPartitions()` from somewhere else. For me, `hasAssignedPartitions` is tightly related to the state of

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-20 Thread StephanEwen
Github user StephanEwen commented on the issue: https://github.com/apache/flink/pull/4321 This patch looks good. As a minor comment: I would prefer to not have `hasAssignedPartitions` as a field, but rather return it from the `reassignPartitions()` method and have it only as
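
A minimal, hypothetical Java sketch of the two variants under discussion: the mutable `hasAssignedPartitions` field from the patch, versus returning the state from `reassignPartitions()` and keeping it as a local of the run loop. All names and the loop structure are invented for illustration and do not reflect the actual `KafkaConsumerThread` internals.

```java
import java.util.Collections;
import java.util.Set;

// Hypothetical, simplified sketch; class and method names are invented for illustration.
class AssignmentLoopSketch {

    // Option 1 (current patch): a mutable field updated as a side effect of reassignment.
    private boolean hasAssignedPartitions;
    private volatile boolean running = true;

    void runLoopWithField() {
        while (running) {
            if (newPartitionsArrived()) {
                reassignPartitionsUpdatingField();   // side effect: sets the field
            }
            if (hasAssignedPartitions) {
                pollOnce();
            }
        }
    }

    void reassignPartitionsUpdatingField() {
        hasAssignedPartitions = !currentAssignment().isEmpty();
    }

    // Option 2 (suggested above): return the state and keep it as a local of the loop,
    // so other callers of reassignPartitions() cannot leave a stale field behind.
    void runLoopWithLocal() {
        boolean hasAssigned = false;
        while (running) {
            if (newPartitionsArrived()) {
                hasAssigned = reassignPartitionsReturningState();
            }
            if (hasAssigned) {
                pollOnce();
            }
        }
    }

    boolean reassignPartitionsReturningState() {
        return !currentAssignment().isEmpty();
    }

    // Stubs so the sketch is self-contained.
    private boolean newPartitionsArrived() { return false; }
    private void pollOnce() { }
    private Set<Integer> currentAssignment() { return Collections.emptySet(); }
}
```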

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-20 Thread StephanEwen
Github user StephanEwen commented on the issue: https://github.com/apache/flink/pull/4321 I think the new pull request description template would have been awesome here ;-)

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-20 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4321 Looks good now, +1 on my side. Let's also wait a bit for @StephanEwen to see if he has any more comments regarding the use of an extra `hasAssignedPartitions` field (since he commented on that

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-19 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 I have also squashed the previous fixups - there is only a new one.

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-19 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 I have added a unit test for closing. I think it should be triggered/tested in one of the `ITCases`, but the test is fairly easy, so it shouldn't hurt us to have this tested explicitly.
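
For illustration, a minimal example of what an explicit closing test could look like, using only made-up stand-ins (the actual test is in the PR): it asserts that `shutdown()` terminates the run loop and that the consumer gets closed afterwards.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.concurrent.atomic.AtomicBoolean;
import org.junit.Test;

// Made-up stand-ins only; the real closing test lives in the pull request.
public class ClosingTestSketch {

    static class LoopingThread extends Thread {
        volatile boolean running = true;
        final AtomicBoolean consumerClosed = new AtomicBoolean(false);

        @Override
        public void run() {
            while (running) {
                try {
                    Thread.sleep(10);              // stand-in for polling records
                } catch (InterruptedException e) {
                    return;
                }
            }
            consumerClosed.set(true);              // stand-in for kafkaConsumer.close()
        }

        void shutdown() {
            running = false;
        }
    }

    @Test
    public void shutdownTerminatesLoopAndClosesConsumer() throws Exception {
        LoopingThread thread = new LoopingThread();
        thread.start();

        thread.shutdown();
        thread.join(10_000);

        assertFalse("loop thread should have terminated", thread.isAlive());
        assertTrue("consumer should have been closed", thread.consumerClosed.get());
    }
}
```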

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-19 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 IMO Mockito tests tend to repeat the implementation. Instead of testing for the effect, they tend to do the same thing as the actual code, but backwards. In other words, they have
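
To illustrate the point about Mockito tests mirroring the implementation, here is a small, made-up example (not from the Flink code base): the test simply verifies the same calls the production method makes, so it pins the internal call sequence and breaks on refactorings even when the observable behaviour is unchanged.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

// Made-up interface and test, only to illustrate the "implementation written backwards" pattern.
public class MirrorTestExample {

    interface FakeConsumer {                          // hypothetical stand-in, not the Kafka API
        void wakeup();
        void assign(List<Integer> partitions);
    }

    static class Fetcher {                            // hypothetical code under test
        private final FakeConsumer consumer;
        Fetcher(FakeConsumer consumer) { this.consumer = consumer; }

        void reassign(List<Integer> partitions) {
            consumer.wakeup();
            consumer.assign(partitions);
        }
    }

    @Test
    public void testMirrorsTheImplementation() {
        FakeConsumer consumer = mock(FakeConsumer.class);
        Fetcher fetcher = new Fetcher(consumer);

        fetcher.reassign(Arrays.asList(1, 2));

        // These verifications restate reassign() call by call: they check how the method
        // works internally rather than any externally observable effect.
        verify(consumer).wakeup();
        verify(consumer).assign(Arrays.asList(1, 2));
    }
}
```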

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-18 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4321 One other question: I need a bit more context on why the version bump requires that change in the `KafkaConsumerThread`. From what I perceive, that should be a separate issue to fix hot looping,

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-18 Thread tzulitai
Github user tzulitai commented on the issue: https://github.com/apache/flink/pull/4321 @pnowojski we can't just drop that test, IMO. It's crucial that those tests exist to guard against incorrect reassignment logic in the `KafkaConsumerThread`. Breaking that would mess up the

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-14 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 @tzulitai could you look at this PR, particularly at the last commit (fixup)? I'm not a big fan of mocks and Mockito-based tests, and I would really be inclined to just drop this test.

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-13 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 Hmm, would a blocking operation be appropriate here? This would prevent `shutdown()` from actually breaking the loop. I think we would need some timeout here?
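
A rough sketch of the trade-off being raised, assuming a hypothetical simplified loop (this is not the actual `KafkaConsumerThread`): a plain blocking `take()` would indeed keep `shutdown()` from breaking the loop, whereas `poll()` with a timeout still avoids hot spinning but rechecks the running flag periodically.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical, simplified loop; names and structure are invented for illustration.
class PartitionWaitSketch {

    private final BlockingQueue<List<Integer>> newPartitionsQueue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    void runLoop() throws InterruptedException {
        while (running) {
            if (!hasAssignedPartitions()) {
                // Wait for new partitions without spinning, but wake up regularly so a
                // concurrent shutdown() can still break the loop.
                List<Integer> newPartitions = newPartitionsQueue.poll(100, TimeUnit.MILLISECONDS);
                if (newPartitions != null) {
                    assignPartitions(newPartitions);
                }
                continue;
            }
            pollRecordsOnce();
        }
    }

    void shutdown() {
        running = false;   // observed at the latest once the poll timeout expires
    }

    // Stubs so the sketch is self-contained.
    private boolean hasAssignedPartitions() { return false; }
    private void assignPartitions(List<Integer> partitions) { }
    private void pollRecordsOnce() { }
}
```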

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-13 Thread pnowojski
Github user pnowojski commented on the issue: https://github.com/apache/flink/pull/4321 Good catch with this spinning, I missed that. Checking for assigned partitions on each iteration is unfortunately costly, because there is no cheap `isEmpty()` method. The one that I

[GitHub] flink issue #4321: [FLINK-7174] Bump Kafka 0.10 dependency to 0.10.2.1

2017-07-13 Thread StephanEwen
Github user StephanEwen commented on the issue: https://github.com/apache/flink/pull/4321 I think this pull request will make the Kafka consumer go into a hot busy-waiting loop when it has no partitions assigned. I would suggest doing a blocking `take()` or so on the