[GitHub] flink issue #4607: [FLINK-6306][connectors] Sink for eventually consistent f...
Github user stevenzwu commented on the issue: https://github.com/apache/flink/pull/4607 @aljoscha is there any doc/write-up about the reworking of BucketingSink? ---
[GitHub] flink pull request #4357: (release-1.3) [FLINK-7143, FLINK-7195] Collection ...
Github user stevenzwu commented on a diff in the pull request: https://github.com/apache/flink/pull/4357#discussion_r128642917

Diff: flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java

```diff
@@ -517,16 +519,13 @@ public void initializeState(FunctionInitializationContext context) throws Except
 				LOG.debug("Using the following offsets: {}", restoredState);
 			}
 		}
-		if (restoredState != null && restoredState.isEmpty()) {
-			restoredState = null;
-		}
 	} else {
 		LOG.info("No restore state for FlinkKafkaConsumer.");
 	}
 }

 @Override
-public void snapshotState(FunctionSnapshotContext context) throws Exception {
+public final void snapshotState(FunctionSnapshotContext context) throws Exception {
```

End diff --

```
the version-specific implementations for FlinkKafkaConsumerBase may override that and have incorrect implementations, whereas our tests would never realize it.
```

@tzulitai why would this be a concern for FlinkKafkaConsumerBase? If version-specific implementations have bugs, they should have tests to catch and prevent those bugs. We do need the capability to override the snapshot method to a no-op. What would be your suggested alternative?

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---
[GitHub] flink pull request #4357: (release-1.3) [FLINK-7143, FLINK-7195] Collection ...
Github user stevenzwu commented on a diff in the pull request: https://github.com/apache/flink/pull/4357#discussion_r128642186

Diff: flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java

```diff
@@ -517,16 +519,13 @@ public void initializeState(FunctionInitializationContext context) throws Except
 				LOG.debug("Using the following offsets: {}", restoredState);
 			}
 		}
-		if (restoredState != null && restoredState.isEmpty()) {
-			restoredState = null;
-		}
 	} else {
 		LOG.info("No restore state for FlinkKafkaConsumer.");
 	}
 }

 @Override
-public void snapshotState(FunctionSnapshotContext context) throws Exception {
+public final void snapshotState(FunctionSnapshotContext context) throws Exception {
```

End diff --

@tzulitai looks like the behavior was changed/fixed in 1.3. Here is the Kafka09Fetcher.java code from 1.2 that was causing the behavior I described earlier:

```java
// if checkpointing is enabled, we are not automatically committing to Kafka.
kafkaProperties.setProperty(
		ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,
		Boolean.toString(!enableCheckpointing));
```

---
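The inversion in the quoted 1.2-era snippet can be sketched as a standalone snippet. The class and method names below are hypothetical (this is not Flink's actual code); `enable.auto.commit` is the real Kafka consumer property key behind `ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG`:

```java
import java.util.Properties;

// Sketch of the 1.2-era behavior discussed above: the Kafka consumer's
// auto-commit flag is hard-wired to the inverse of Flink checkpointing,
// so disabling checkpointing forces auto-commit on.
public class AutoCommitSketch {
	static Properties kafkaProperties(boolean enableCheckpointing) {
		Properties props = new Properties();
		// if checkpointing is enabled, we are not automatically committing to Kafka
		props.setProperty("enable.auto.commit", Boolean.toString(!enableCheckpointing));
		return props;
	}

	public static void main(String[] args) {
		System.out.println(kafkaProperties(true));  // checkpointing on  -> auto-commit "false"
		System.out.println(kafkaProperties(false)); // checkpointing off -> auto-commit "true"
	}
}
```

This is why, under 1.2, one could not simply set checkpointing to false in the router use case: doing so flips auto-commit to true with no way to opt out.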
[GitHub] flink pull request #4357: (release-1.3) [FLINK-7143, FLINK-7195] Collection ...
Github user stevenzwu commented on a diff in the pull request: https://github.com/apache/flink/pull/4357#discussion_r128113797

Diff: flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.java

```diff
@@ -517,16 +519,13 @@ public void initializeState(FunctionInitializationContext context) throws Except
 				LOG.debug("Using the following offsets: {}", restoredState);
 			}
 		}
-		if (restoredState != null && restoredState.isEmpty()) {
-			restoredState = null;
-		}
 	} else {
 		LOG.info("No restore state for FlinkKafkaConsumer.");
 	}
 }

 @Override
-public void snapshotState(FunctionSnapshotContext context) throws Exception {
+public final void snapshotState(FunctionSnapshotContext context) throws Exception {
```

End diff --

@tzulitai what's the reason to make this final? In our router use case, we override the snapshotState method to a no-op. We disabled Flink checkpointing by setting the checkpoint interval to Long.MAX_VALUE. We can't disable Flink checkpointing outright, because otherwise the Kafka consumer's auto.commit will be hard-coded to true. @zhenzhongxu ^

---
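The override-to-no-op pattern described above, which marking `snapshotState` as `final` would forbid, can be illustrated with a simplified stand-in for the consumer base class. All class names here are hypothetical, not Flink's actual API:

```java
// Hypothetical, simplified stand-in for FlinkKafkaConsumerBase,
// showing the override-to-no-op pattern that `final` would forbid.
class SimplifiedConsumerBase {
	private boolean snapshotted = false;

	// If this were declared `public final void snapshotState()`,
	// the subclass override below would fail to compile.
	public void snapshotState() {
		snapshotted = true; // stands in for recording Kafka offsets into state
	}

	public boolean didSnapshot() {
		return snapshotted;
	}
}

// Router use case: suppress offset snapshotting entirely.
class NoOpSnapshotConsumer extends SimplifiedConsumerBase {
	@Override
	public void snapshotState() {
		// intentionally a no-op
	}
}

public class NoOpOverrideSketch {
	public static void main(String[] args) {
		SimplifiedConsumerBase noop = new NoOpSnapshotConsumer();
		noop.snapshotState();
		System.out.println(noop.didSnapshot()); // prints "false": the override suppressed snapshotting
	}
}
```

This also shows the trade-off under debate: `final` guarantees every version-specific consumer snapshots offsets the same way, at the cost of closing off this deliberate opt-out.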