Appreciate the super clear summary of the different benchmark experiments!
This will add lots of value to potential users, especially when we
integrate GPU benchmarks. Thanks, Anand!

Best,
Andy

On Thu, Aug 18, 2022 at 10:22 AM Danny McCormick via dev <
dev@beam.apache.org> wrote:

> I left a few comments, but overall this sounds like a good plan to me -
> thanks for the writeup!
>
> On Tue, Aug 16, 2022 at 9:36 AM Anand Inguva via dev <dev@beam.apache.org>
> wrote:
>
>> Hi,
>>
>> I created a doc[1] which outlines the plan for the RunInference API[2]
>> benchmark/performance tests. I would appreciate feedback on the following:
>>
>>    - Models used for the benchmark tests.
>>    - Metrics calculated as part of the benchmark tests.
>>
>>
>> If you have any input or suggestions on additional metrics or models that
>> would be helpful to the Beam ML community as part of these benchmark
>> tests, please let us know.
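>>
>> For context, the kind of pipeline these benchmarks would exercise looks
>> roughly like the sketch below. It uses the sklearn model handler as one
>> example; the model path is a placeholder, and the actual models and
>> metrics are the ones proposed in the doc.
>>
>> import numpy
>> import apache_beam as beam
>> from apache_beam.ml.inference.base import RunInference
>> from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy
>>
>> # Placeholder path to a pickled sklearn model; any trained model works here.
>> model_handler = SklearnModelHandlerNumpy(model_uri='gs://my-bucket/model.pkl')
>>
>> with beam.Pipeline() as pipeline:
>>     _ = (
>>         pipeline
>>         # Each element is one example; RunInference batches them internally.
>>         | 'CreateExamples' >> beam.Create([numpy.array([1.0, 2.0, 3.0])])
>>         | 'RunInference' >> RunInference(model_handler)
>>         | 'PrintResults' >> beam.Map(print))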
>>
>> [1]
>> https://docs.google.com/document/d/1xmh9D_904H-6X19Mi0-tDACwCCMvP4_MFA9QT0TOym8/edit#
>> [2]
>> https://github.com/apache/beam/blob/67cb87ecc2d01b88f8620ed6821bcf71376d9849/sdks/python/apache_beam/ml/inference/base.py#L269
>>
>>
>> Thanks,
>> Anand
>>
>
