+1
Specifically, p.run().waitUntilFinish() would throw an exception if there
were errors during pipeline execution.
On Wed, Nov 8, 2023 at 8:05 AM John Casey via dev wrote:
> Yep, that's a common misunderstanding with Beam.
>
> The code that is actually executed in the try block is just for pipeline
> construction, and no data is processed at this point in time.
Yep, that's a common misunderstanding with Beam.
The code that is actually executed in the try block is just for pipeline
construction, and no data is processed at this point in time.
Once the pipeline is constructed, the various ParDos are serialized and
sent to the runners, where they are actually executed.
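The distinction can be sketched with a hypothetical mini-pipeline in plain Java (this is an analogy, not the Beam API): the try block around construction only records the stages, so a broken stage never throws there; the failure only surfaces when run() executes the stages, which is why the catch has to go around the call that actually runs the pipeline.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical stand-in for Beam's deferred-execution model: apply() only
// records a stage; nothing runs until run() (Beam's p.run().waitUntilFinish()).
class MiniPipeline {
    private final List<UnaryOperator<String>> stages = new ArrayList<>();

    MiniPipeline apply(UnaryOperator<String> stage) {
        stages.add(stage); // construction time: the stage is only recorded
        return this;
    }

    String run(String input) {
        String value = input;
        for (UnaryOperator<String> stage : stages) {
            value = stage.apply(value); // execution time: failures surface here
        }
        return value;
    }
}

public class Demo {
    public static void main(String[] args) {
        MiniPipeline p = new MiniPipeline();
        try {
            // Construction never throws, even though the second stage is broken.
            p.apply(s -> s.toUpperCase())
             .apply(s -> { throw new IllegalStateException("write to S3 failed"); });
        } catch (RuntimeException e) {
            System.out.println("never reached: construction does not execute stages");
        }
        try {
            p.run("hello"); // the exception is only thrown now
        } catch (IllegalStateException e) {
            System.out.println("caught at run time: " + e.getMessage());
        }
    }
}
```

Running this prints only `caught at run time: write to S3 failed`; the first catch block is never entered.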
Hey John,
Yes, that's how my code is set up; I have the FileIO.write() in its own
try-catch block. I took a second look at where exactly the code is failing,
and it's actually in a ParDo function which runs before I call
FileIO.write(). But even within that, I've tried adding a try-catch, but
the exception still isn't caught there.
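For failures inside a ParDo, one common approach is to put the try-catch inside the per-element processing itself and route failing elements to a separate "dead-letter" output, rather than wrapping pipeline construction. In Beam this is typically done with a DoFn that has multiple outputs; the plain-Java sketch below (class and method names are illustrative, not Beam API) shows only the shape of the idea:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the dead-letter idea behind a Beam ParDo with two
// outputs: successes go one way, per-element failures the other, and
// processing continues. Names here are illustrative, not Beam's API.
public class DeadLetterSketch {
    static List<String> ok = new ArrayList<>();
    static List<String> failed = new ArrayList<>();

    // Analogue of a DoFn's processElement: the try-catch lives *inside*
    // the per-element code, which is what actually runs on the workers.
    static void processElement(String element) {
        try {
            if (element.isEmpty()) {
                throw new IllegalArgumentException("empty record");
            }
            ok.add(element.toUpperCase());
        } catch (RuntimeException e) {
            failed.add(element + " -> " + e.getMessage()); // dead-letter output
        }
    }

    public static void main(String[] args) {
        for (String e : new String[] {"a", "", "b"}) {
            processElement(e);
        }
        System.out.println("ok=" + ok + " failed=" + failed);
    }
}
```

The key point is that this try-catch runs at execution time on the workers, so it can see the exception that a try-catch around pipeline construction never will.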
There are two execution times when using Beam. The first execution is
local, when the pipeline is constructed, and the second is remote on the
runner, processing data.
Based on what you said, it sounds like you are wrapping pipeline
construction in a try-catch, and constructing FileIO isn't failing.
File write failures should throw exceptions that will terminate the
pipeline on failure. (Generally a distributed runner will make multiple
attempts before abandoning the entire pipeline, of course.)
Are you seeing files failing to be written but no exceptions being thrown?
If so, this is definitely unexpected.
Hello,
I am a developer using Apache Beam in my Java application, and I need some
help on how to handle exceptions when writing a file to S3. I have tried
wrapping my code within a try-catch block, but no exception is being thrown
within the try block. I'm assuming that FileIO doesn't throw any exceptions.