cozos commented on issue #22986:
URL: https://github.com/apache/beam/issues/22986#issuecomment-1235500077

   What do you think about doing something like:
   
   * Add `max_failed_rows` to `WriteToBigQuery`
   * Catch the per-row serialization exceptions for bad rows when writing to the temp Avro files
   * If the load files are JSON, forward `max_failed_rows` to the load job config (BigQuery's `maxBadRecords`)
   * Put the bad rows in a `FailedRows` output tag (rough usage sketch after this list)
   
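   To make the proposal concrete, here is a rough sketch of how it might look from the pipeline author's side. `max_failed_rows` is the proposed knob and does not exist today, the table name is just a placeholder, and the `FailedRows` access simply follows the tag naming above:

   ```python
   import apache_beam as beam
   from apache_beam.io.gcp.bigquery import WriteToBigQuery

   with beam.Pipeline() as p:
       rows = p | beam.Create([
           {'id': 1, 'name': 'good row'},
           {'id': 'not-an-int', 'name': 'bad row'},  # would fail Avro conversion
       ])

       result = rows | WriteToBigQuery(
           table='my_project:my_dataset.my_table',   # placeholder table
           method=WriteToBigQuery.Method.FILE_LOADS,
           max_failed_rows=100,  # proposed: tolerate up to 100 bad rows per load
       )

       # Proposed behaviour: rows that could not be written to the temp Avro
       # files (or that the JSON load job rejected) come back on 'FailedRows'.
       _ = result['FailedRows'] | 'LogFailedRows' >> beam.Map(print)
   ```
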
   If this is too much to take on, are there any workarounds for users in the meantime?
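   
   For context, the kind of workaround I imagine users reaching for today is pre-validating rows before the write and routing anything unserializable to a dead-letter output. A minimal sketch, where the `ValidateRow` check is just an illustrative stand-in for real schema validation:

   ```python
   import apache_beam as beam
   from apache_beam import pvalue


   class ValidateRow(beam.DoFn):
       """Routes rows that would not serialize cleanly to a dead-letter output."""
       BAD = 'bad_rows'

       def process(self, row):
           try:
               # Stand-in check; real code would mirror the table schema and the
               # coercions WriteToBigQuery applies when building the load files.
               int(row['id'])
               yield row
           except (KeyError, TypeError, ValueError):
               yield pvalue.TaggedOutput(self.BAD, row)


   with beam.Pipeline() as p:
       rows = p | beam.Create([{'id': 1}, {'id': 'oops'}])
       outputs = rows | beam.ParDo(ValidateRow()).with_outputs(
           ValidateRow.BAD, main='good')
       good, bad = outputs.good, outputs[ValidateRow.BAD]
       # `good` would feed WriteToBigQuery as usual; `bad` goes to a
       # dead-letter sink (GCS, another BigQuery table, etc.).
       _ = bad | 'LogBadRows' >> beam.Map(print)
   ```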

