[ https://issues.apache.org/jira/browse/FLINK-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432561#comment-15432561 ]
ASF GitHub Bot commented on FLINK-3874:
---------------------------------------
Github user fhueske commented on a diff in the pull request:
https://github.com/apache/flink/pull/2244#discussion_r75836831
--- Diff: flink-streaming-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTableSinkTestBase.java ---
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.streaming.connectors.kafka;
+
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.common.restartstrategy.RestartStrategies;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.table.Row;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.SinkFunction;
+import org.apache.flink.streaming.api.functions.source.SourceFunction;
+import org.apache.flink.streaming.connectors.kafka.internals.TypeUtil;
+import org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner;
+import org.apache.flink.streaming.util.serialization.DeserializationSchema;
+import org.apache.flink.test.util.SuccessException;
+
+import java.io.Serializable;
+import java.util.HashSet;
+import java.util.Properties;
+
+import static org.apache.flink.test.util.TestUtils.tryExecute;
+
+public abstract class KafkaTableSinkTestBase extends KafkaTestBase implements Serializable {
+
+ protected final static String TOPIC = "customPartitioningTestTopic";
+ protected final static int PARALLELISM = 1;
+ protected final static String[] FIELD_NAMES = new String[] {"field1", "field2"};
+ protected final static TypeInformation[] FIELD_TYPES = TypeUtil.toTypeInfo(new Class[] {Integer.class, String.class});
+
+ public void testKafkaTableSink() throws Exception {
--- End diff ---
If you add a `@Test` annotation here, we do not need the test methods in
the classes that extend this TestBase.
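A minimal sketch of that suggestion, assuming JUnit 4 (which the existing Kafka connector tests build on); every name below except testKafkaTableSink() is illustrative and not taken from the PR:

    import org.junit.Test;

    // Base class: JUnit 4 runs inherited @Test methods, so every concrete
    // subclass executes this shared test without declaring its own.
    abstract class SinkTestBaseSketch {

        @Test
        public void testKafkaTableSink() throws Exception {
            // shared test flow; the subclass plugs in its version-specific parts
            runVersionSpecificSink();
        }

        // illustrative hook the Kafka-version-specific tests would implement
        protected abstract void runVersionSpecificSink() throws Exception;
    }

    // Concrete test class: no @Test method of its own is needed.
    class Kafka08SinkTestSketch extends SinkTestBaseSketch {
        @Override
        protected void runVersionSpecificSink() throws Exception {
            // version-specific setup and assertions would go here
        }
    }

Because JUnit 4 treats inherited @Test methods like locally declared ones, each subclass still gets its own run of the shared test.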
> Add a Kafka TableSink with JSON serialization
> ---------------------------------------------
>
> Key: FLINK-3874
> URL: https://issues.apache.org/jira/browse/FLINK-3874
> Project: Flink
> Issue Type: New Feature
> Components: Table API & SQL
> Reporter: Fabian Hueske
> Assignee: Ivan Mushketyk
> Priority: Minor
>
> Add a TableSink that writes JSON serialized data to Kafka.
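For reference, a rough sketch of how such a sink might be used from the Table API once it exists; the sink class name, constructor arguments, and surrounding variables are assumptions for illustration, not the API of the pending PR:

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");

    // Hypothetical sink that serializes each Row of the result to a JSON object
    // and writes it to the given Kafka topic.
    KafkaJsonTableSink sink = new KafkaJsonTableSink("jsonOutputTopic", props);

    // Emit the result of a Table API query through the sink.
    resultTable.writeToSink(sink);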
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)