Hi Luke!
I agree that `sdk_worker_parallelism` doesn't change after job submission.
However, users can change the configuration from m -> n over a period of
time.
Exposing this value as a metric helps in observing the
behavior/impact of the job across such config changes.
[1]
https://github.com/apac
Flink has a set of workers, each worker has a number of task slots. A
pipeline will use the number of slots based upon what it was configured to
run with.
Are you trying to get the total number of workers, total number of task
slots, number of task slots your pipeline is using, or number of worker
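For reference, the worker/slot sizing described above is driven by Flink configuration; a minimal sketch with illustrative values:

```yaml
# flink-conf.yaml (values are illustrative, not a recommendation)
taskmanager.numberOfTaskSlots: 4   # slots offered by each TaskManager (worker)
parallelism.default: 8             # default parallelism a pipeline runs with
```

With these settings, a pipeline running at the default parallelism of 8 would occupy 8 task slots, i.e. two fully used TaskManagers.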
Hi Luke!
Thanks! We use the Flink Runner and run SDK workers as processes [1]
within a k8s pod. Could you please share the broad steps for how one can do
this in the runner?
[1]
https://github.com/apache/beam/blob/master/runners/java-fn-execution/src/main/java/org/apache/beam/runners/fnexecution/envir
That code only executes within a runner, is only used by certain runners,
and wouldn't work in general from user code that is monitoring the job or
from user code executing within one of the workers.
You would need to author code that is likely runner-specific to look up the
number of workers associa
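As a sketch of what such runner-specific code could look like for the Flink case: Flink's JobManager exposes a REST endpoint, GET /taskmanagers, listing the registered TaskManagers, so a worker count can be derived from its response. The sample payload, the host/port in the comment, and the naive string-based JSON handling below are illustrative assumptions; a real implementation would call the endpoint and use a proper JSON library.

```java
// Sketch: derive the Flink worker count from the JobManager REST API
// (GET /taskmanagers). The hard-coded sample below stands in for a real
// call to http://<jobmanager>:8081/taskmanagers, which depends on your
// deployment.
public class TaskManagerCount {

    // Count TaskManager entries by counting their "id" fields.
    // (Naive on purpose; use a JSON library in real code.)
    static int countTaskManagers(String json) {
        int count = 0;
        int from = 0;
        while ((from = json.indexOf("\"id\"", from)) != -1) {
            count++;
            from += 4;
        }
        return count;
    }

    public static void main(String[] args) {
        // Abbreviated example of the /taskmanagers response shape.
        String sample = "{\"taskmanagers\":[{\"id\":\"tm-1\",\"slotsNumber\":4},"
                + "{\"id\":\"tm-2\",\"slotsNumber\":4}]}";
        System.out.println("taskmanagers=" + countTaskManagers(sample)); // prints "taskmanagers=2"
    }
}
```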
Hi Users!
Is there a recommended approach to publish metrics on the number of SDK
workers available/running as a gauge?
[1]
https://github.com/apache/beam/blob/master/runners/java-fn-execution/src/main/java/org/apache/beam/runners/fnexecution/control/DefaultJobBundleFactory.java#L267
[2]
htt
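Not a full answer, but for context on the gauge side of the question: in the Beam Java SDK a gauge is obtained via Metrics.gauge(namespace, name) and updated with set(long), and it reports the latest observed value. A minimal stand-in with the same last-value-wins semantics, in plain Java with no Beam dependency (the class and metric names here are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

// Stand-in for a Beam Gauge: keeps only the most recent value
// (last-value-wins), which is what Metrics.gauge(...).set(...) reports.
public class SdkWorkerGauge {
    private final AtomicLong latest = new AtomicLong(0);

    // Mirrors Beam's Gauge.set(long): record the latest observation.
    public void set(long value) {
        latest.set(value);
    }

    public long latest() {
        return latest.get();
    }

    public static void main(String[] args) {
        SdkWorkerGauge workers = new SdkWorkerGauge();
        workers.set(4); // e.g. observed sdk_worker_parallelism = 4
        workers.set(6); // config changed; the gauge keeps only the latest
        System.out.println("sdk_workers=" + workers.latest()); // prints "sdk_workers=6"
    }
}
```

In an actual pipeline the set(...) call would live in code that can observe the worker count (e.g. runner-side code, as discussed above), since a gauge only reflects whatever value is pushed into it.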