[ https://issues.apache.org/jira/browse/BEAM-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17548765#comment-17548765 ]
Danny McCormick commented on BEAM-10689:
----------------------------------------
This issue has been migrated to https://github.com/apache/beam/issues/20380
> Unskip test_metrics (py) in Spark runner
> ----------------------------------------
>
> Key: BEAM-10689
> URL: https://issues.apache.org/jira/browse/BEAM-10689
> Project: Beam
> Issue Type: Improvement
> Components: runner-spark, testing
> Reporter: Kyle Weaver
> Priority: P3
> Labels: portability-spark
> Time Spent: 1h
> Remaining Estimate: 0h
>
> For the test_metrics failure, I found that metrics are being passed to
> Python. The test fails because no metrics match the filter [1]: the runner
> rewrites the step names (count1 becomes ref_AppliedPTransform_count1_17,
> for example), and the filter's matching logic [2] is too strict to
> recognize the rewritten names:
> Spark Runner:
> MetricKey(step=ref_AppliedPTransform_count1_17, metric=MetricName(namespace=ns, name=counter), labels={}): 2
> MetricKey(step=ref_AppliedPTransform_count2_18, metric=MetricName(namespace=ns, name=counter), labels={}): 4
> ...
> Fn API Runner:
> MetricKey(step=count1, metric=MetricName(namespace=ns, name=counter), labels={}): 2,
> MetricKey(step=count2, metric=MetricName(namespace=ns, name=counter), labels={}): 4
> Also, note that Flink has its own, completely different implementation of
> test_metrics [3].
> [1]
> https://github.com/apache/beam/blob/2ef7b9db8af015dcba544b93df00a4e54cd8caf2/sdks/python/apache_beam/runners/portability/fn_api_runner/fn_runner_test.py#L744
> [2]
> https://github.com/apache/beam/blob/2ef7b9db8af015dcba544b93df00a4e54cd8caf2/sdks/python/apache_beam/metrics/metric.py#L151-L155
> [3]
> https://github.com/apache/beam/blob/2ef7b9db8af015dcba544b93df00a4e54cd8caf2/sdks/python/apache_beam/runners/portability/flink_runner_test.py#L251
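To make the mismatch concrete, here is a minimal standalone sketch of the
kind of '/'-boundary sub-path matching the filter in [2] performs.
matches_sub_path below is a simplified stand-in, not the actual Beam code:

def matches_sub_path(actual_scope, filter_scope):
    # Simplified sub-path match: filter_scope must appear in actual_scope
    # on '/' boundaries (or at the very start/end of the string).
    pos = actual_scope.find(filter_scope)
    if pos == -1:
        return False
    end = pos + len(filter_scope)
    if pos != 0 and actual_scope[pos - 1] != '/':
        return False
    if end != len(actual_scope) and actual_scope[end] != '/':
        return False
    return True

# The Fn API runner reports the step name verbatim, so the filter matches.
assert matches_sub_path('count1', 'count1')

# The Spark runner wraps the name in the transform's unique id, so the
# boundary check rejects it and the test sees no matching metrics.
assert not matches_sub_path('ref_AppliedPTransform_count1_17', 'count1')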