yeandy commented on issue #21437: URL: https://github.com/apache/beam/issues/21437#issuecomment-1271521797
I like this proposal, as it makes it more flexible.

> it would increment _num_failed_inferences twice

To be clear, there are two types: `Metrics.counter()` and `Metrics.distribution()`. We "increment" the `self._inference_counter` and "update" the distributions `self._inference_batch_latency_micro_secs`, `self._inference_request_batch_size`, and `self._inference_request_batch_byte_size`. So we should also have something like `increment_metrics()` for `_inference_counter` and `_failed_inference_counter` (for `_num_failed_inferences`).

> I think we do need to consider how a failure affects the other metrics though. For example, the inference counter basically just adds the number of items in the batch after the batch finishes, but if the batch is interrupted by failures, that number won't be accurate will it? Or maybe I'm misunderstanding what's going on under the hood of run_inference.

That's the right interpretation. See my comment https://github.com/apache/beam/issues/21437#issuecomment-1268707869. I think the only thing we would want to update is `_num_failed_inferences` with the number of elements in that batch that did fail.
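To make the counter/distribution split concrete, here is a minimal pure-Python sketch of the pattern being discussed. It is not the actual Beam implementation; the `Counter`/`Distribution` stand-ins and the `update_metrics()`/`record_failures()` method names are hypothetical, chosen only to illustrate how a separate failure counter could record just the failed elements of a batch while the success-path metrics stay untouched.

```python
class Counter:
    """Stand-in for Metrics.counter(): an incremented running total."""
    def __init__(self):
        self.value = 0

    def inc(self, n=1):
        self.value += n


class Distribution:
    """Stand-in for Metrics.distribution(): updated with observed values."""
    def __init__(self):
        self.values = []

    def update(self, v):
        self.values.append(v)


class MetricsCollector:
    """Sketch of the RunInference metrics fields mentioned above."""
    def __init__(self):
        self._inference_counter = Counter()
        # Backs the proposed _num_failed_inferences metric.
        self._failed_inference_counter = Counter()
        self._inference_batch_latency_micro_secs = Distribution()
        self._inference_request_batch_size = Distribution()

    def update_metrics(self, batch_size, latency_micros):
        # Only called after a batch completes successfully.
        self._inference_counter.inc(batch_size)
        self._inference_batch_latency_micro_secs.update(latency_micros)
        self._inference_request_batch_size.update(batch_size)

    def record_failures(self, num_failed):
        # On a (partial) failure, count only the elements that failed;
        # the success counter is not advanced for them.
        self._failed_inference_counter.inc(num_failed)


collector = MetricsCollector()
collector.update_metrics(batch_size=8, latency_micros=1200)
collector.record_failures(num_failed=3)
print(collector._inference_counter.value)         # 8
print(collector._failed_inference_counter.value)  # 3
```

The point of the split is that a failed batch never inflates `_inference_counter` or the latency/size distributions; only the dedicated failure counter moves.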
