damccorm commented on issue #21437: URL: https://github.com/apache/beam/issues/21437#issuecomment-1247087175
Actually, I think there could be value in adding `num_failed_inferences` before worrying about the side input model loading stuff. That would probably involve putting some error handling around our run_inference call: https://github.com/apache/beam/blob/2d4f61c93250fb41112a2535394e7328bc1fdf95/sdks/python/apache_beam/ml/inference/base.py#L418 and having that handler update the metric's count.

@BjornPrime, as you free up, would you mind working on this? We probably can't totally close out the issue, but we can make a dent in it. cc/ @yeandy
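
Rough idea of what I mean (a minimal sketch, not the actual `base.py` implementation -- the `_SketchRunInferenceDoFn` name and the exact `ModelHandler` method calls here are just illustrative):

```python
import apache_beam as beam
from apache_beam.metrics import Metrics


class _SketchRunInferenceDoFn(beam.DoFn):
  """Sketch of wrapping the inference call to count failed inferences."""

  def __init__(self, model_handler):
    self._model_handler = model_handler
    self._failed_inferences = Metrics.counter(
        'RunInference', 'num_failed_inferences')

  def setup(self):
    self._model = self._model_handler.load_model()

  def process(self, batch):
    try:
      # Roughly where the run_inference call linked above happens today.
      yield from self._model_handler.run_inference(batch, self._model)
    except Exception:
      # Count the failed batch, then re-raise so the error still surfaces.
      self._failed_inferences.inc()
      raise
```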