kfaraz commented on code in PR #17535:
URL: https://github.com/apache/druid/pull/17535#discussion_r1882107499
##########
indexing-service/src/main/java/org/apache/druid/indexing/overlord/supervisor/SupervisorManager.java:
##########
@@ -56,13 +64,17 @@ public class SupervisorManager
// SupervisorTaskAutoScaler could be null
private final ConcurrentHashMap<String, SupervisorTaskAutoScaler> autoscalers = new ConcurrentHashMap<>();
private final Object lock = new Object();
+ private final ListeningExecutorService shutdownExec;
private volatile boolean started = false;
@Inject
public SupervisorManager(MetadataSupervisorManager metadataSupervisorManager)
{
this.metadataSupervisorManager = metadataSupervisorManager;
+ this.shutdownExec = MoreExecutors.listeningDecorator(
+ Execs.multiThreaded(25, "supervisor-manager-shutdown-%d")
Review Comment:
25 may be excessive in some cases and inadequate in others. Maybe initialize
the executor lazily inside the `stop()` method, so that the number of
required threads can be computed at run time. The `shutdownExec` need not be
a class-level field either.
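The lazy-initialization idea might look roughly like this. This is only a sketch using plain `java.util.concurrent` stand-ins for Druid's `Execs`/`MoreExecutors`; the class name `LazyShutdownSketch`, the cap of 25, and the timeout are all illustrative assumptions, not Druid's actual code:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LazyShutdownSketch
{
  /**
   * Stops the given supervisors in parallel on an executor created only when
   * stop() runs, sized from the actual number of supervisors rather than a
   * fixed class-level pool.
   */
  public static void stopAll(List<Runnable> supervisorStops) throws InterruptedException
  {
    // Compute the pool size at run time, capped to avoid excessive threads.
    int threads = Math.max(1, Math.min(supervisorStops.size(), 25));
    ExecutorService shutdownExec = Executors.newFixedThreadPool(threads);
    try {
      for (Runnable stop : supervisorStops) {
        shutdownExec.submit(stop);
      }
    }
    finally {
      // No further submissions; wait for the queued stops to finish.
      shutdownExec.shutdown();
      shutdownExec.awaitTermination(1, TimeUnit.MINUTES);
    }
  }
}
```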
---
Alternatively, instead of using a completely new executor, you could
consider using the `scheduledExec` inside each supervisor. That executor
basically just sits idle most of the time and is responsible only for
submitting `RunNotice` to the notice queue.
You could add a `stopAsync` method to `SeekableStreamSupervisor` that does
the following:
- returns a future that we coalesce and wait upon
- internally submits a runnable to the `scheduledExec` to perform the actual
`stop`
I guess the only thing we would miss out on is parallelizing the
`autoscaler.stop()` calls, which should not be a concern?
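The `stopAsync` idea above could be sketched roughly as follows, using a plain `java.util.concurrent` stand-in for the supervisor's `scheduledExec`. The class name, the `stopped` flag, and the executor wiring are illustrative assumptions, not `SeekableStreamSupervisor`'s actual API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;

public class StopAsyncSketch
{
  // Stand-in for the supervisor's scheduledExec, which normally just submits
  // RunNotices to the notice queue and sits idle otherwise.
  private final ScheduledExecutorService scheduledExec =
      Executors.newSingleThreadScheduledExecutor();

  // Visible marker so a caller can observe that the stop completed
  // (illustrative only).
  public volatile boolean stopped = false;

  // Stand-in for the supervisor's actual graceful-stop logic.
  private void stopGracefully()
  {
    stopped = true;
  }

  /**
   * Queues the stop on the supervisor's own executor and returns a future
   * that the manager can coalesce with those of the other supervisors and
   * wait upon together.
   */
  public Future<?> stopAsync()
  {
    Future<?> future = scheduledExec.submit(this::stopGracefully);
    // No further run notices should be scheduled once stop is queued.
    scheduledExec.shutdown();
    return future;
  }
}
```

The manager would then collect one future per supervisor and block on the whole batch, getting parallelism across supervisors without any new pool.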
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]