damccorm opened a new issue, #24209:
URL: https://github.com/apache/beam/issues/24209

   ### What needs to happen?
   
   Right now, if RunInference fails to run inference on a batch, it fails the whole transform. For batch pipelines, this means the pipeline fails on non-retryable errors (which represent most inference failures); for streaming pipelines, it means infinite retries and a stuck pipeline.
   
   Instead, we should handle failures by passing them along to the next step as part of the `PredictionResult` object, so that users can perform custom error handling (see the sketch below). We should also document this behavior in the PyDoc and on our website.
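
   A minimal sketch of what that user-side error handling could look like, assuming the proposed change surfaces failures on the result object. The `exception` field, the `split_by_outcome` helper, and the model URI below are illustrative assumptions for this issue, not the final API:

```python
import logging

import numpy as np
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference, PredictionResult
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy


def split_by_outcome(result: PredictionResult):
    """Route results whose (hypothetical) exception field was populated
    to a 'failed' output; pass successful predictions through unchanged."""
    if getattr(result, 'exception', None) is not None:
        yield beam.pvalue.TaggedOutput('failed', result)
    else:
        yield result


def run(model_uri: str = 'gs://my-bucket/model.pkl'):
    examples = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
    with beam.Pipeline() as p:
        results = (
            p
            | 'CreateExamples' >> beam.Create(examples)
            | 'RunInference' >> RunInference(
                SklearnModelHandlerNumpy(model_uri=model_uri))
            | 'SplitByOutcome' >> beam.ParDo(split_by_outcome).with_outputs(
                'failed', main='ok'))

        # Successful predictions continue down the main path...
        _ = results.ok | 'UsePredictions' >> beam.Map(print)
        # ...while failed batches get custom error handling instead of
        # failing the whole transform or retrying forever.
        _ = results.failed | 'LogFailures' >> beam.Map(
            lambda r: logging.error('Inference failed for %s', r.example))
```

   With something shaped like this, a single bad batch would no longer fail a batch pipeline or wedge a streaming one; the failing elements end up on a side output the user controls.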
   
   ### Issue Priority
   
   Priority: 2
   
   ### Issue Component
   
   Component: run-inference

