Thanks, it is working properly now.
NB: I had to delete the folder in code because Hadoop's OutputFormats will only
overwrite file by file, not the whole folder.
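For reference, that kind of cleanup can be done with the Hadoop FileSystem API along these lines (a rough sketch, not our exact code; the output path is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutputDir {
    public static void main(String[] args) throws Exception {
        // Hypothetical output folder that the Flink job will write into.
        Path outputDir = new Path("hdfs:///data/output");
        FileSystem fs = outputDir.getFileSystem(new Configuration());
        if (fs.exists(outputDir)) {
            // 'true' = recursive: removes the whole folder, not just single files.
            fs.delete(outputDir, true);
        }
    }
}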

From: Fabian Hueske [mailto:fhue...@gmail.com]
Sent: Tuesday 20 December 2016 14:21
To: user@flink.apache.org
Subject: Re: Generate _SUCCESS (map-reduce style) when folder has been written

Hi Gwenhael,
The _SUCCESS files were originally generated by Hadoop for successful jobs. 
AFAIK, Spark leverages Hadoop's Input and OutputFormats and seems to have 
followed this approach as well to stay compatible.
You could use Flink's HadoopOutputFormat, which is a wrapper for Hadoop 
OutputFormats (both the mapred and mapreduce APIs).
The wrapper also produces the _SUCCESS files. In fact, you might be able to 
use exactly the same OutputFormat as in your Spark job.
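For example, wrapping Hadoop's mapreduce TextOutputFormat in a Flink batch job could look roughly like this (a minimal sketch; the data, types, and output path are just placeholders):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class SuccessMarkerExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Some (key, value) records; the wrapped TextOutputFormat expects
        // Hadoop Writable types.
        DataSet<Tuple2<Text, IntWritable>> result = env.fromElements(
                Tuple2.of(new Text("a"), new IntWritable(1)),
                Tuple2.of(new Text("b"), new IntWritable(2)));

        // The Hadoop Job object only carries the output configuration here.
        Job job = Job.getInstance();
        TextOutputFormat.setOutputPath(job, new Path("hdfs:///data/output")); // hypothetical path

        // Flink's wrapper around the Hadoop OutputFormat; on successful job
        // completion the Hadoop committer writes the _SUCCESS marker into the folder.
        HadoopOutputFormat<Text, IntWritable> hadoopOF =
                new HadoopOutputFormat<>(new TextOutputFormat<Text, IntWritable>(), job);

        result.output(hadoopOF);
        env.execute("write with _SUCCESS marker");
    }
}

Since the wrapper just delegates to the Hadoop OutputFormat and its committer, the output folder ends up looking the same as what your Spark job produced.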
Best,
Fabian

2016-12-20 14:00 GMT+01:00 Gwenhael Pasquiers 
<gwenhael.pasqui...@ericsson.com>:
Hi,

Sorry if it's already been asked, but is there a built-in way for Flink to 
generate a _SUCCESS file in the folders it has been writing into (using the 
write method with an OutputFormat)?

We are replacing a Spark job that was generating those files (and further 
operations rely on them).

Best regards,

Gwenhaël PASQUIERS
