[ https://issues.apache.org/jira/browse/FLINK-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433508#comment-15433508 ]
ASF GitHub Bot commented on FLINK-3874:
---------------------------------------
Github user mushketyk commented on a diff in the pull request:
https://github.com/apache/flink/pull/2244#discussion_r75938953
--- Diff: flink-streaming-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/KafkaTableSink.java ---
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.streaming.connectors.kafka;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.table.Row;
+import org.apache.flink.api.table.sinks.StreamTableSink;
+import org.apache.flink.api.table.typeutils.RowTypeInfo;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner;
+import org.apache.flink.streaming.util.serialization.SerializationSchema;
+import org.apache.flink.util.Preconditions;
+
+import java.util.Properties;
+
+/**
+ * A version-agnostic Kafka {@link StreamTableSink}.
+ *
+ * <p>Version-specific table sinks need to extend this class and
+ * override {@link #createKafkaProducer(String, Properties, SerializationSchema, KafkaPartitioner)}.
+ */
+public abstract class KafkaTableSink implements StreamTableSink<Row> {
+
+ private final String topic;
+ private final Properties properties;
+ private SerializationSchema<Row> serializationSchema;
+ private final KafkaPartitioner<Row> partitioner;
+ private String[] fieldNames;
+ private TypeInformation<?>[] fieldTypes;
+
+ /**
+ * Creates a KafkaTableSink.
+ *
+ * @param topic Kafka topic to write to.
+ * @param properties Properties for the Kafka producer.
+ * @param partitioner Partitioner to select the Kafka partition for each item.
+ */
+ public KafkaTableSink(
+ String topic,
+ Properties properties,
+ KafkaPartitioner<Row> partitioner) {
+
+ this.topic = Preconditions.checkNotNull(topic, "topic");
+ this.properties = Preconditions.checkNotNull(properties, "properties");
+ this.partitioner = Preconditions.checkNotNull(partitioner, "partitioner");
+ }
+
+ /**
+ * Returns the version-specific Kafka producer.
+ *
+ * @param topic Kafka topic to produce to.
+ * @param properties Properties for the Kafka producer.
+ * @param serializationSchema Serialization schema to use to create Kafka records.
+ * @param partitioner Partitioner to select Kafka partition.
+ * @return The version-specific Kafka producer
+ */
+ protected abstract FlinkKafkaProducerBase<Row> createKafkaProducer(
+ String topic, Properties properties,
+ SerializationSchema<Row> serializationSchema,
+ KafkaPartitioner<Row> partitioner);
+
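+ /**
+ * Creates the serialization schema that turns {@link Row}s into Kafka records.
+ *
+ * @param fieldNames Field names of the table to emit.
+ * @return The serialization schema for this sink.
+ */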
+ protected abstract SerializationSchema<Row> createSerializationSchema(String[] fieldNames);
+
+ @Override
+ public void emitDataStream(DataStream<Row> dataStream) {
+ FlinkKafkaProducerBase<Row> kafkaProducer = createKafkaProducer(topic, properties, serializationSchema, partitioner);
+ dataStream.addSink(kafkaProducer);
+ }
+
+ @Override
+ public TypeInformation<Row> getOutputType() {
+ return new RowTypeInfo(getFieldTypes());
+ }
+
+ @Override
+ public String[] getFieldNames() {
+ return fieldNames;
+ }
+
+ @Override
+ public TypeInformation<?>[] getFieldTypes() {
+ return fieldTypes;
+ }
+
+ @Override
+ public KafkaTableSink configure(String[] fieldNames, TypeInformation<?>[] fieldTypes) {
+ this.fieldNames = Preconditions.checkNotNull(fieldNames, "fieldNames");
+ this.fieldTypes = Preconditions.checkNotNull(fieldTypes, "fieldTypes");
+ Preconditions.checkArgument(fieldNames.length == fieldTypes.length,
+ "Number of provided field names and types does not match.");
+ this.serializationSchema = createSerializationSchema(fieldNames);
+
+ return this;
--- End diff ---
Fixed.
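
The diff is cut off at the review comment above, so for readers following along, here is a minimal sketch of how a version-specific subclass could implement the two abstract methods. The names Kafka08JsonTableSink and JsonRowSerializationSchema are illustrative assumptions and are not part of the diff shown here; FlinkKafkaProducer08 is the connector's existing Kafka 0.8 producer.

    // Hypothetical sketch: Kafka08JsonTableSink and JsonRowSerializationSchema
    // are assumed names for illustration, not taken from the diff above.
    public class Kafka08JsonTableSink extends KafkaTableSink {

        public Kafka08JsonTableSink(String topic, Properties properties, KafkaPartitioner<Row> partitioner) {
            super(topic, properties, partitioner);
        }

        @Override
        protected FlinkKafkaProducerBase<Row> createKafkaProducer(
                String topic, Properties properties,
                SerializationSchema<Row> serializationSchema,
                KafkaPartitioner<Row> partitioner) {
            // Delegate to the 0.8-specific producer; the base class attaches it
            // to the DataStream in emitDataStream().
            return new FlinkKafkaProducer08<>(topic, serializationSchema, properties, partitioner);
        }

        @Override
        protected SerializationSchema<Row> createSerializationSchema(String[] fieldNames) {
            // Serialize each Row as a JSON object keyed by the configured field names.
            return new JsonRowSerializationSchema(fieldNames);
        }
    }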
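
And a rough usage sketch from the Table API side, assuming the hypothetical Kafka08JsonTableSink above; FixedPartitioner (each sink subtask writes to one fixed partition) comes from the connector's partitioner package, and the table, query, and topic names are made up:

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);

    // Assume a table named "clicks" was registered with tableEnv beforehand.
    Table result = tableEnv.sql("SELECT user, cnt FROM clicks");

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");

    // Write the query result to Kafka as JSON-serialized rows.
    result.writeToSink(new Kafka08JsonTableSink("clicks-out", props, new FixedPartitioner<Row>()));

    env.execute();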
> Add a Kafka TableSink with JSON serialization
> ---------------------------------------------
>
> Key: FLINK-3874
> URL: https://issues.apache.org/jira/browse/FLINK-3874
> Project: Flink
> Issue Type: New Feature
> Components: Table API & SQL
> Reporter: Fabian Hueske
> Assignee: Ivan Mushketyk
> Priority: Minor
>
> Add a TableSink that writes JSON serialized data to Kafka.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)