Hi Morven,
You posted the same question a few days ago and it was also answered
correctly.
Please do not repost the same question again.
You can reply to the earlier thread if you have a follow-up question.
To answer your question briefly:
No, Flink does not trigger a MapReduce job.
The whole job is executed by Flink.
Hi,
I’d like to sink my data into HDFS using SequenceFileAsBinaryOutputFormat with
compression, and I found a way to do it via this link:
https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/hadoop_compatibility.html.
The code works, but I’m curious to know: since it creates a MapReduce Job
instance, does it actually trigger a MapReduce job?
Hi Fabian,
Thank you for the clarification.
Best,
Morven Huang
On Wed, Apr 10, 2019 at 9:57 PM Fabian Hueske wrote:
> Hi,
>
> Flink's Hadoop compatibility functions just wrap functions that were
> implemented against Hadoop's interfaces in wrapper functions that are
> implemented against Flink's interfaces.
Hi,
Flink's Hadoop compatibility functions just wrap functions that were
implemented against Hadoop's interfaces in wrapper functions that are
implemented against Flink's interfaces.
There is no Hadoop cluster started or MapReduce job being executed.
Job is just a class of the Hadoop API. It does not start a MapReduce job;
Flink only uses it to carry the configuration for the wrapped OutputFormat.
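To make the explanation above concrete, here is a minimal sketch of the approach from the Hadoop compatibility docs: the Hadoop Job object is built only to hold the output configuration, and Flink's HadoopOutputFormat wrapper runs the Hadoop OutputFormat inside Flink's own runtime. The output path, job name, and sample data are illustrative assumptions, not from the original thread.

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat;

public class SequenceFileSinkSketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Assume 'data' is your DataSet of Hadoop key/value pairs.
    DataSet<Tuple2<BytesWritable, BytesWritable>> data = env.fromElements(
        Tuple2.of(new BytesWritable("key".getBytes()),
                  new BytesWritable("value".getBytes())));

    // The Job object is only a configuration holder here; Flink never
    // submits it to a Hadoop cluster and no MapReduce job is started.
    Job job = Job.getInstance();
    FileOutputFormat.setOutputPath(job, new Path("hdfs:///tmp/out")); // hypothetical path
    FileOutputFormat.setCompressOutput(job, true);
    SequenceFileAsBinaryOutputFormat.setOutputCompressionType(
        job, SequenceFile.CompressionType.BLOCK);

    // Flink's wrapper executes the Hadoop OutputFormat inside Flink tasks.
    data.output(new HadoopOutputFormat<>(new SequenceFileAsBinaryOutputFormat(), job));
    env.execute("sequence-file-sink");
  }
}
```

Running this requires flink-hadoop-compatibility and the Hadoop client libraries on the classpath; the only Hadoop machinery involved at runtime is the OutputFormat itself, executed by Flink's workers.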