Hi Mihail,

Thank you for your question. Do you have a short example that reproduces
the problem? It is hard to find the cause without an error message or some
example code.

I wonder how your loop works without WriteMode.OVERWRITE, because it should
throw an exception in that case. Or do you change the file names on every
write?
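As a side note, the difference between the two write modes can be illustrated with plain java.nio file options, which behave analogously (this is a minimal sketch, not Flink itself; the class name OverwriteDemo and the temp-file path are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.*;

public class OverwriteDemo {

    // Returns true iff CREATE_NEW fails on an existing file while
    // TRUNCATE_EXISTING replaces its contents -- roughly mirroring
    // Flink's WriteMode.NO_OVERWRITE vs. WriteMode.OVERWRITE.
    static boolean demo() throws IOException {
        Path target = Files.createTempFile("out", ".csv");
        boolean refused = false;
        try {
            // CREATE_NEW ~ NO_OVERWRITE: refuse to replace existing output
            Files.write(target, "1;2\n".getBytes(), StandardOpenOption.CREATE_NEW);
        } catch (FileAlreadyExistsException e) {
            refused = true;
        }
        // TRUNCATE_EXISTING ~ OVERWRITE: replace the old contents in place
        Files.write(target, "1;2\n".getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
        boolean overwritten = "1;2\n".equals(new String(Files.readAllBytes(target)));
        Files.delete(target);
        return refused && overwritten;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```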

Cheers,
Max

On Tue, Jun 30, 2015 at 3:47 PM, Mihail Vieru <vi...@informatik.hu-berlin.de
> wrote:

>  I think my problem is related to a loop in my job.
>
> Before the loop, the writeAsCsv method works fine, even in overwrite mode.
>
> In the loop, in the first iteration, it writes a folder containing only
> empty files to HDFS, even though the DataSet it is supposed to write
> contains elements.
>
> Needless to say, this doesn't occur in a local execution environment when
> writing to the local file system.
>
>
> I would appreciate any input on this.
>
> Best,
> Mihail
>
>
>
> On 30.06.2015 12:10, Mihail Vieru wrote:
>
> Hi Till,
>
> thank you for your reply.
>
> I have the following code snippet:
>
> intermediateGraph.getVertices().writeAsCsv(tempGraphOutputPath, "\n",
> ";", WriteMode.OVERWRITE);
>
> When I remove the WriteMode parameter, it works. So I can conclude that
> the DataSet contains data elements.
>
> Cheers,
> Mihail
>
>
> On 30.06.2015 12:06, Till Rohrmann wrote:
>
>  Hi Mihail,
>
> have you checked that the DataSet you want to write to HDFS actually
> contains data elements? You can try calling collect which retrieves the
> data to your client to see what’s in there.
>
> Cheers,
> Till
>
> On Tue, Jun 30, 2015 at 12:01 PM, Mihail Vieru <
> vi...@informatik.hu-berlin.de> wrote:
>
>> Hi,
>>
>> the writeAsCsv method is not writing anything to HDFS (version 1.2.1)
>> when the WriteMode is set to OVERWRITE.
>> A file is created, but it's empty, and there is no trace of errors in the
>> Flink or Hadoop logs on any node in the cluster.
>>
>> What could cause this issue? I really need this feature.
>>
>> Best,
>> Mihail
>>
>
>
>
>