Hi,

I created a doc[1] which outlines the plan for the RunInference API[2]
benchmark/performance tests. I would appreciate feedback on the following:

   - Models used for the benchmark tests.
   - Metrics calculated as part of the benchmark tests.


If you have suggestions for additional metrics or models that would be
helpful to the Beam ML community as part of the benchmark tests, please let
us know.

[1]
https://docs.google.com/document/d/1xmh9D_904H-6X19Mi0-tDACwCCMvP4_MFA9QT0TOym8/edit#
[2]
https://github.com/apache/beam/blob/67cb87ecc2d01b88f8620ed6821bcf71376d9849/sdks/python/apache_beam/ml/inference/base.py#L269


Thanks,
Anand
