ryanthompson591 commented on code in PR #23830:
URL: https://github.com/apache/beam/pull/23830#discussion_r1004886767


##########
sdks/python/apache_beam/ml/inference/base_test.py:
##########
@@ -171,6 +171,38 @@ def test_unexpected_inference_args_passed(self):
             FakeModelHandlerFailsOnInferenceArgs(),
             inference_args=inference_args)
 
+  def test_increment_failed_batches_counter(self):
+    with self.assertRaises(ValueError, FakeModelHandlerFailsOnInferenceArgs):
+      with TestPipeline() as pipeline:
+        examples = [1, 5, 3, 10]
+        pcoll = pipeline | 'start' >> beam.Create(examples)
+        inference_args = {'key': True}

Review Comment:
   I don't think the inference args are important to this test. I would just remove them unless they're actually needed.
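
If the test only needs a batch that fails, one way to drop `inference_args` entirely is a fake handler whose `run_inference` always raises. A rough, self-contained sketch of that idea; the `FakeModelHandlerAlwaysFails` and `FailedBatchesCounterTest` names below are hypothetical, not from the PR:

```python
import unittest

import apache_beam as beam
from apache_beam.ml.inference import base
from apache_beam.testing.test_pipeline import TestPipeline


class FakeModelHandlerAlwaysFails(base.ModelHandler):
  """Hypothetical fake: raises during inference without needing inference_args."""
  def load_model(self):
    return object()

  def run_inference(self, batch, model, inference_args=None):
    raise ValueError('fake inference failure')


class FailedBatchesCounterTest(unittest.TestCase):
  def test_increment_failed_batches_counter(self):
    # The failure surfaces when the pipeline runs, so assertRaises wraps the
    # TestPipeline context, mirroring the pattern used in the PR's test.
    with self.assertRaises(ValueError):
      with TestPipeline() as pipeline:
        examples = [1, 5, 3, 10]
        pcoll = pipeline | 'start' >> beam.Create(examples)
        _ = pcoll | base.RunInference(FakeModelHandlerAlwaysFails())
```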



##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -426,17 +427,22 @@ def setup(self):
 
   def process(self, batch, inference_args):
     start_time = _to_microseconds(self._clock.time_ns())
-    result_generator = self._model_handler.run_inference(
+    try:
+      result_generator = self._model_handler.run_inference(
         batch, self._model, inference_args)
-    predictions = list(result_generator)
+    except BaseException:
+      self._metrics_collector.failed_batches_counter.inc()
+      raise

Review Comment:
   I often prefer 
   ```
   except BaseException as e:
     ...
     raise e
   ```
   
   that way the same exception gets re-raised explicitly.
   
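
Applied to the hunk above, the suggestion would read roughly like this. The placement of `predictions = list(result_generator)` inside the `try` block is an assumption, since the hunk is truncated:

```python
  def process(self, batch, inference_args):
    start_time = _to_microseconds(self._clock.time_ns())
    try:
      result_generator = self._model_handler.run_inference(
          batch, self._model, inference_args)
      predictions = list(result_generator)
    except BaseException as e:
      # Count the failed batch, then re-raise the same exception object.
      self._metrics_collector.failed_batches_counter.inc()
      raise e
```

Either form propagates the original exception; the explicit `raise e` mainly documents which exception is being re-raised.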



##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -337,6 +337,7 @@ def __init__(self, namespace: str):
     # Metrics
     self._inference_counter = beam.metrics.Metrics.counter(
         namespace, 'num_inferences')
+    self.failed_batches_counter = 0

Review Comment:
   yes, you should follow the pattern that all the other metrics are using.
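
For example, mirroring the `num_inferences` counter just above; the metric name `'failed_batches'` here is an assumption, not taken from the PR:

```python
  def __init__(self, namespace: str):
    # Metrics
    self._inference_counter = beam.metrics.Metrics.counter(
        namespace, 'num_inferences')
    # Follows the same counter pattern; 'failed_batches' is a placeholder name.
    self.failed_batches_counter = beam.metrics.Metrics.counter(
        namespace, 'failed_batches')
```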



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
