bvolpato commented on code in PR #28513:
URL: https://github.com/apache/beam/pull/28513#discussion_r1340532347
##########
runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingDataflowWorker.java:
##########
@@ -2027,6 +2034,10 @@ private void updateVMMetrics() {
private void updateThreadMetrics() {
timeAtMaxActiveThreads.getAndReset();
timeAtMaxActiveThreads.addValue(workUnitExecutor.allThreadsActiveTime());
+ activeThreads.getAndReset();
+ activeThreads.addValue(workUnitExecutor.activeCount());
+ maxActiveThreads.getAndReset();
+ maxActiveThreads.addValue(chooseMaximumNumberOfThreads());
Review Comment:
I see. Yeah, it's a bit confusing/misleading, but I guess you are just
replicating what we have for javaHarnessUsedMemory/javaHarnessMaxMemory?
Given it's cumulative, I feel like using a `long` makes more sense than an `int`.
For the default case of 300 threads, we could only collect a limited number of
times before overflowing.
Today the default interval (`windmillHarnessUpdateReportingPeriod`) is 10s,
which is 60*60/10*24*365 ≈ 3.15M updates per year, so an `int` would blow up
after 2147483647 / 300 / 3.15M ≈ 2.3 years.
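A quick back-of-the-envelope check of that estimate (a standalone sketch, not code from the PR; the constants are just the defaults mentioned above):

```java
public class OverflowEstimate {
  public static void main(String[] args) {
    long maxThreads = 300;                           // default maximum number of threads
    long updatesPerYear = 60L * 60 / 10 * 24 * 365;  // 10s reporting interval => 3,153,600 updates/year

    // Worst case: the counter grows by maxThreads on every update.
    double yearsUntilIntOverflow =
        (double) Integer.MAX_VALUE / maxThreads / updatesPerYear;
    double yearsUntilLongOverflow =
        (double) Long.MAX_VALUE / maxThreads / updatesPerYear;

    System.out.printf("int overflows after  ~%.2f years%n", yearsUntilIntOverflow);   // ~2.27 years
    System.out.printf("long overflows after ~%.2e years%n", yearsUntilLongOverflow);  // ~9.7e9 years
  }
}
```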