Your question isn't for me, but I just want to say that I am really happy
to hear you are doing this. I would like to get more continuous
benchmarking so we can reduce any overheads Beam might introduce, for
example on Samza in your case. And I would like to focus basically entirely
on portable mode. (I do think that a runner like Samza could execute Java
DoFns in classic style and Python DoFns in portable mode, all in the same
deployment, if you were ambitious.)
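
For concreteness, here is a minimal sketch of what a portable-mode
submission might look like from Python, assuming a Samza job server is
already running at localhost:8099 (the endpoint, environment type, and
pipeline contents here are illustrative, not prescriptive):

    # Minimal sketch: submit a Python pipeline to an assumed portable
    # (Samza) job server; the flags are standard portable-runner options.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions([
        "--runner=PortableRunner",
        "--job_endpoint=localhost:8099",  # assumed job server address
        "--environment_type=LOOPBACK",    # SDK harness runs in-process
    ])

    with beam.Pipeline(options=options) as p:
        (p
         | beam.Create(["a", "b", "c"])
         | beam.Map(lambda x: x.upper())
         | beam.Map(print))

LOOPBACK keeps the SDK harness in the submitting process, which is
convenient for isolating harness overhead while benchmarking; DOCKER or
PROCESS environments would be closer to a production deployment.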

Kenn

On Thu, Jul 28, 2022 at 10:31 AM Bharath Kumara Subramanian <
codin.mart...@gmail.com> wrote:

> Hi,
>
> We are currently working on making Beam portable mode mainstream, in
> addition to supporting classic mode, for the Samza runner.
>
> I was looking at OSS benchmarks on how other runners performed in portable
> mode in comparison with classic mode. However, all I found were performance
> numbers and metrics for various classic runners
> <http://metrics.beam.apache.org/d/1/getting-started?orgId=1&viewPanel=123125>.
>
> Checking in to see if anyone in the community has benchmarked portable-mode
> performance for their runners.
>
> Additionally, I found vanilla metrics around gRPC performance
> <https://grafana-dot-grpc-testing.appspot.com/?orgId=1>, although I am
> looking for pointers to get granular insights into E2E pipeline latency,
> e.g., the time spent on the network across stages vs. the serialization
> cost for gRPC vs. the actual time spent executing the ParDo, and so on.
>
>
> Thanks,
> Bharath
>
>
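
Regarding the granular latency question above: as far as I know, Beam does
not expose a built-in breakdown of network time vs. gRPC serialization vs.
DoFn execution, and metric support varies by runner. The DoFn-execution
slice can be approximated with user-defined metrics, though. A minimal
sketch using the Python Metrics API (the namespace "bench" and the metric
name "pardo_millis" are illustrative):

    # Minimal sketch: time a DoFn body with a user distribution metric,
    # then query it from the pipeline result after the run completes.
    import time

    import apache_beam as beam
    from apache_beam.metrics import Metrics
    from apache_beam.metrics.metric import MetricsFilter

    class TimedDoFn(beam.DoFn):
        def __init__(self):
            # Per-element processing time, in milliseconds.
            self.pardo_millis = Metrics.distribution("bench", "pardo_millis")

        def process(self, element):
            start = time.time()
            out = element.upper()  # stand-in for the real work
            self.pardo_millis.update(int((time.time() - start) * 1000))
            yield out

    with beam.Pipeline() as p:
        _ = p | beam.Create(["a", "b", "c"]) | beam.ParDo(TimedDoFn())

    # The context manager runs the pipeline and stores the result on p.
    for dist in p.result.metrics().query(
            MetricsFilter().with_namespace("bench"))["distributions"]:
        print(dist.key, dist.committed)

This only attributes time spent inside the DoFn; isolating gRPC
serialization and network cost across stages would still require
runner-side instrumentation.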
