Hi,

It throws this exception because the memory segments provided by Flink have 
been exhausted.

How is your writing logic implemented? It seems that you are using Apache 
Beam. Which version of the Flink engine are you running? Could you provide more 
logs so we can understand why the memory segments were exhausted?

One possible reason is that the writing rate is too fast, so you may be able to 
solve it by decreasing the writing rate.
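
Another thing worth checking (assuming you are on Flink 1.10 or later, where 
managed memory is configured this way) is whether your task managers have 
enough managed memory, since these segments come from the managed memory pool. 
A minimal flink-conf.yaml sketch; the sizes below are only placeholders to 
tune for your own cluster:

    # flink-conf.yaml -- example values only, adjust to your setup
    taskmanager.memory.process.size: 4096m
    # either set an absolute size for managed memory ...
    taskmanager.memory.managed.size: 1024m
    # ... or a fraction of total Flink memory (default is 0.4)
    # taskmanager.memory.managed.fraction: 0.4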


> On 9 May 2022, at 6:20 AM, sambhav gupta <competitivesamb...@gmail.com> wrote:
> 
> Hi Flink User Group,
> 
> I am a data engineer from Thoughtworks and recently faced an issue when 
> running some Flink code. The same code ran fine with a smaller file, but on 
> increasing the file size it gave this error:
> Caused by: java.io.EOFException
>       at 
> org.apache.flink.runtime.io.disk.SimpleCollectingOutputView.nextSegment(SimpleCollectingOutputView.java:79)
>       at 
> org.apache.flink.runtime.memory.AbstractPagedOutputView.advance(AbstractPagedOutputView.java:140)
>       at 
> org.apache.flink.runtime.memory.AbstractPagedOutputView.write(AbstractPagedOutputView.java:190)
>       at 
> org.apache.beam.runners.flink.translation.wrappers.DataOutputViewWrapper.write(DataOutputViewWrapper.java:49)
>       at 
> java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
>       at 
> java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786
> 
> Somebody mentioned on Stack Overflow that this could be a network buffer issue, 
> but shouldn't the buffers be spilling to disk so that the code doesn't break, 
> even if it takes more time with a larger file?
> 
> Can somebody help me with this?
> 
> Thanks,
> Sambhav Gupta
> (DATA ENGINEER)
> Thoughtworks
