[ https://issues.apache.org/jira/browse/FLINK-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15525277#comment-15525277 ]
ASF GitHub Bot commented on FLINK-4035:
---------------------------------------
Github user tzulitai commented on a diff in the pull request:
https://github.com/apache/flink/pull/2369#discussion_r80629075
--- Diff: flink-streaming-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer010.java ---
@@ -83,11 +83,11 @@
	 * @param serializationSchema User defined serialization schema supporting key/value messages
	 * @param producerConfig Properties with the producer configuration.
	 */
-	public static <T> FlinkKafkaProducer010Configuration writeToKafka(DataStream<T> inStream,
-			String topicId,
-			KeyedSerializationSchema<T> serializationSchema,
-			Properties producerConfig) {
-		return writeToKafka(inStream, topicId, serializationSchema, producerConfig, new FixedPartitioner<T>());
+	public static <T> FlinkKafkaProducer010Configuration writeToKafkaWithTimestamps(DataStream<T> inStream,
--- End diff --
Add the generic type parameter `T` to `FlinkKafkaProducer010Configuration` here too?
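The point of the suggestion can be sketched in isolation. This is a hedged, self-contained example, not the real Flink classes: `Configuration<T>` below is a hypothetical stand-in for `FlinkKafkaProducer010Configuration`, and the factory method only mimics the signature in the diff. Parameterizing the return type with `<T>` lets the element type flow through to the caller, whereas a raw return type would force unchecked casts.

```java
// Hypothetical stand-in for FlinkKafkaProducer010Configuration<T>;
// it only illustrates why the return type should carry <T>.
class Configuration<T> {
    private final T element;

    Configuration(T element) {
        this.element = element;
    }

    T get() {
        return element;
    }
}

public class Main {
    // Return type parameterized with <T>, as the review comment suggests:
    // callers receive a Configuration<T> and need no casts.
    static <T> Configuration<T> writeToKafkaWithTimestamps(T element) {
        return new Configuration<>(element);
    }

    public static void main(String[] args) {
        Configuration<String> cfg = writeToKafkaWithTimestamps("msg");
        String s = cfg.get(); // type-safe: no cast required
        System.out.println(s);
    }
}
```

With a raw `Configuration` return type, the assignment to `Configuration<String>` would instead produce an unchecked-conversion warning and `cfg.get()` would return `Object`.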
> Bump Kafka producer in Kafka sink to Kafka 0.10.0.0
> ---------------------------------------------------
>
> Key: FLINK-4035
> URL: https://issues.apache.org/jira/browse/FLINK-4035
> Project: Flink
> Issue Type: Bug
> Components: Kafka Connector
> Affects Versions: 1.0.3
> Reporter: Elias Levy
> Assignee: Robert Metzger
> Priority: Minor
>
> Kafka 0.10.0.0 introduced protocol changes related to the producer.
> Published messages now include timestamps, and compressed messages now include
> relative offsets. As it stands, brokers must decompress publisher-compressed
> messages, assign offsets to them, and recompress them, which is wasteful and
> makes it less likely that compression will be used at all.
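Compression is opted into on the producer side via the `compression.type` producer property. As a minimal sketch (the broker address is a placeholder; no connection is made here), this is the kind of producer configuration the quoted overhead discourages on pre-0.10 brokers:

```java
import java.util.Properties;

public class Main {
    public static void main(String[] args) {
        // Producer configuration enabling compression. With a 0.10.0.0
        // producer and broker, compressed batches carry relative offsets,
        // so the broker no longer decompresses and recompresses them.
        Properties producerConfig = new Properties();
        producerConfig.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        producerConfig.setProperty("compression.type", "snappy"); // or gzip, lz4

        System.out.println(producerConfig.getProperty("compression.type"));
    }
}
```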
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)