This is an automated email from the ASF dual-hosted git repository.

echauchot pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ab5f25f4db8b3881a411fb9d51c0ae35ea184708
Author: Etienne Chauchot <echauc...@apache.org>
AuthorDate: Thu May 11 17:19:22 2023 +0200

    [FLINK-31749][hotfix][doc] Remove unavailable HadoopOutputFormat for DataStream
---
 .../docs/connectors/datastream/formats/hadoop.md   | 50 --------------------
 .../docs/connectors/datastream/formats/hadoop.md   | 53 ----------------------
 2 files changed, 103 deletions(-)

diff --git a/docs/content.zh/docs/connectors/datastream/formats/hadoop.md b/docs/content.zh/docs/connectors/datastream/formats/hadoop.md
index 20f0d767efc..0be4e18d75f 100644
--- a/docs/content.zh/docs/connectors/datastream/formats/hadoop.md
+++ b/docs/content.zh/docs/connectors/datastream/formats/hadoop.md
@@ -94,54 +94,4 @@ val input: DataStream[(LongWritable, Text)] =
 {{< /tab >}}
 {{< /tabs >}}
 
-## Using Hadoop OutputFormats
-
-Flink provides a compatibility wrapper for Hadoop `OutputFormats`. Any class that implements `org.apache.hadoop.mapred.OutputFormat` or extends `org.apache.hadoop.mapreduce.OutputFormat` is supported.
-The `OutputFormat` wrapper expects its input data to be a DataSet of 2-tuples containing a key and a value. These are processed by the Hadoop `OutputFormat`.
-
-The following example shows how to use Hadoop's `TextOutputFormat`.
-
-{{< tabs "d4af1c52-0e4c-490c-8c35-e3d60b1b52ee" >}}
-{{< tab "Java" >}}
-
-```java
-// Obtain the result we want to emit
-DataStream<Tuple2<Text, IntWritable>> hadoopResult = [...];
-
-// Set up the Hadoop TextOutputFormat.
-HadoopOutputFormat<Text, IntWritable> hadoopOF =
-  // create the Flink wrapper.
-  new HadoopOutputFormat<Text, IntWritable>(
-    // set the Hadoop OutputFormat and specify the job.
-    new TextOutputFormat<Text, IntWritable>(), job
-  );
-hadoopOF.getConfiguration().set("mapreduce.output.textoutputformat.separator", " ");
-TextOutputFormat.setOutputPath(job, new Path(outputPath));
-
-// Emit data using the Hadoop TextOutputFormat.
-hadoopResult.output(hadoopOF);
-```
-
-{{< /tab >}}
-{{< tab "Scala" >}}
-
-```scala
-// Obtain the result we want to emit
-val hadoopResult: DataStream[(Text, IntWritable)] = [...]
-
-val hadoopOF = new HadoopOutputFormat[Text,IntWritable](
-  new TextOutputFormat[Text, IntWritable],
-  new JobConf)
-
-hadoopOF.getJobConf.set("mapred.textoutputformat.separator", " ")
-FileOutputFormat.setOutputPath(hadoopOF.getJobConf, new Path(resultPath))
-
-hadoopResult.output(hadoopOF)
-
-
-```
-
-{{< /tab >}}
-{{< /tabs >}}
-
 {{< top >}}
diff --git a/docs/content/docs/connectors/datastream/formats/hadoop.md b/docs/content/docs/connectors/datastream/formats/hadoop.md
index edb95edde0d..86637f3cf5a 100644
--- a/docs/content/docs/connectors/datastream/formats/hadoop.md
+++ b/docs/content/docs/connectors/datastream/formats/hadoop.md
@@ -103,57 +103,4 @@ val input: DataStream[(LongWritable, Text)] =
 {{< /tab >}}
 {{< /tabs >}}
 
-## Using Hadoop OutputFormats
-
-Flink provides a compatibility wrapper for Hadoop `OutputFormats`. Any class
-that implements `org.apache.hadoop.mapred.OutputFormat` or extends
-`org.apache.hadoop.mapreduce.OutputFormat` is supported.
-The OutputFormat wrapper expects its input data to be a DataSet containing
-2-tuples of key and value. These are to be processed by the Hadoop OutputFormat.
-
-The following example shows how to use Hadoop's `TextOutputFormat`.
-
-{{< tabs "d4af1c52-0e4c-490c-8c35-e3d60b1b52ee" >}}
-{{< tab "Java" >}}
-
-```java
-// Obtain the result we want to emit
-DataStream<Tuple2<Text, IntWritable>> hadoopResult = [...];
-
-// Set up the Hadoop TextOutputFormat.
-HadoopOutputFormat<Text, IntWritable> hadoopOF =
-  // create the Flink wrapper.
-  new HadoopOutputFormat<Text, IntWritable>(
-    // set the Hadoop OutputFormat and specify the job.
-    new TextOutputFormat<Text, IntWritable>(), job
-  );
-hadoopOF.getConfiguration().set("mapreduce.output.textoutputformat.separator", " ");
-TextOutputFormat.setOutputPath(job, new Path(outputPath));
-
-// Emit data using the Hadoop TextOutputFormat.
-hadoopResult.output(hadoopOF);
-```
-
-{{< /tab >}}
-{{< tab "Scala" >}}
-
-```scala
-// Obtain your result to emit.
-val hadoopResult: DataStream[(Text, IntWritable)] = [...]
-
-val hadoopOF = new HadoopOutputFormat[Text,IntWritable](
-  new TextOutputFormat[Text, IntWritable],
-  new JobConf)
-
-hadoopOF.getJobConf.set("mapred.textoutputformat.separator", " ")
-FileOutputFormat.setOutputPath(hadoopOF.getJobConf, new Path(resultPath))
-
-hadoopResult.output(hadoopOF)
-
-
-```
-
-{{< /tab >}}
-{{< /tabs >}}
-
 {{< top >}}
