those.
Thanks, Arijit
The HDFS property Spark should check is "dfs.support.append".
I believe the failure is intermittent, since in most cases a new file is created to
store the block-addition event. I need to look into the code again to see when
these files are created anew and when they are appended.
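For reference, "dfs.support.append" is an HDFS-side property, not a Spark one. A minimal hdfs-site.xml sketch that enables it is below (note this is an assumption about the cluster setup; on recent Hadoop versions append support is enabled by default and this property may be ignored or deprecated):

```xml
<!-- hdfs-site.xml sketch: allow the WAL writer to reopen and append to an
     existing log file instead of failing with an append error -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```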
Thanks, Arijit
___
    ... (ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
We tried increasing the timeout to 60 seconds but could not eliminate the
issue completely. Requesting suggestions on what recourse we have to stop
this data loss.
Thanks, Arijit
I am not familiar with the code yet, but is it
possible to generate a new file whenever a conflict of this sort happens?
Thanks again, Arijit
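For what it's worth, Spark later exposed settings that do essentially what is asked above: closing the WAL file after each write so the next write opens a fresh file, sidestepping append conflicts. A hedged spark-defaults.conf sketch follows (property names are from the Spark Streaming configuration; verify they exist in the Spark version in use before relying on them):

```
# spark-defaults.conf sketch: close the WAL file after every write so the
# next write creates a new file rather than appending to an existing one
spark.streaming.receiver.writeAheadLog.closeFileAfterWrite  true
spark.streaming.driver.writeAheadLog.closeFileAfterWrite    true
```

The trade-off is more, smaller files on HDFS in exchange for never hitting the append path.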
From: Tathagata Das <tathagata.das1...@gmail.com>
Sent: Monday, November 7, 2016 7:59:06 PM
To: Arijit
Cc: user@sp
2316 (size: 283.1 KB, free: 2.6 GB)
I am sure Spark Streaming is not expected to lose data when the WAL is enabled. So
what are we doing wrong here?
Thanks, Arijit
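For context, the usual prerequisites for no-data-loss with receiver-based streams are a reliable checkpoint directory plus the receiver WAL. A minimal configuration sketch (the checkpoint path must additionally be set in application code via StreamingContext.checkpoint; the property name below is from the Spark Streaming configuration):

```
# spark-defaults.conf sketch: enable the receiver write-ahead log so received
# blocks are persisted to the checkpoint directory before acknowledgement
spark.streaming.receiver.writeAheadLog.enable  true
```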