walterddr commented on PR #10349:
URL: https://github.com/apache/pinot/pull/10349#issuecomment-1448505451

   > In v2 engine at present we use the same thread-pool for both of the above 
which can lead to a dead-lock in cases when the thread-pool is fully occupied 
with threads blocked on the queue.
   
   Can you elaborate more on "threads blocked on the queue"? Which queue are we 
talking about? 
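   For context, the kind of same-pool deadlock the quoted description seems to allude to can be sketched as follows. This is a hypothetical minimal example (names like `SamePoolDeadlock` are made up, not Pinot code): a single-thread pool stands in for a fully occupied pool, a blocking "merge" task occupies the only thread, and the "producer" task that would unblock it sits stuck in the same pool's backlog.

```java
import java.util.concurrent.*;

public class SamePoolDeadlock {
    // Returns true if the merge task times out because the producer that
    // would unblock it is queued behind it in the same (fully occupied) pool.
    static boolean demonstrateDeadlock() throws Exception {
        // A 1-thread pool stands in for a pool whose threads are all occupied.
        ExecutorService pool = Executors.newFixedThreadPool(1);
        BlockingQueue<String> results = new ArrayBlockingQueue<>(1);

        // The "merge" task blocks taking from the queue...
        Future<String> merge = pool.submit(() -> results.take());
        // ...while the "producer" task that would fill the queue waits behind
        // it in the same pool's backlog: neither can make progress.
        pool.submit(() -> { results.put("segmentResult"); return null; });

        boolean deadlocked;
        try {
            merge.get(200, TimeUnit.MILLISECONDS);
            deadlocked = false;
        } catch (TimeoutException e) {
            deadlocked = true;
        }
        pool.shutdownNow(); // interrupt the stuck take() so the JVM can exit
        return deadlocked;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("deadlocked=" + demonstrateDeadlock());
    }
}
```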
   > 
   > In this PR, I have tried to align v2 engine with v1 engine:
   > 
   > 1. query_worker_* threads will run the per-segment operators (same as 
worker-threads in v1) and the v2 operators.
   > 2. query_runner_* threads will run the BaseCombineOperator#mergeResults 
for leaf-stage. For intermediate stages they'll be responsible for creating the 
physical plan and submitting the OpChain to the scheduler.
   
   This is the desired execution model IMO. Follow-ups:
   1. we should submit stage plans in a bottom-up manner instead of top-down, to 
avoid potentially launching OpChains whose downstream stages are not yet ready.
   2. we should use a query scheduler or resource manager so that when servers 
are occupied we don't submit any additional queries; instead we should buffer 
them on the broker side.
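   The two-pool split endorsed above can be sketched like this (a hypothetical illustration, not Pinot's actual scheduler: class and pool names are invented, and the real pools are the `query_worker_*` / `query_runner_*` threads described in the PR). Workers run per-segment operators and publish results to a queue; a separate runner thread blocks merging those results, but can never starve the workers because they live in a different pool.

```java
import java.util.*;
import java.util.concurrent.*;

public class SplitPools {
    // Simulate: workers square each segment value, runner sums the results.
    public static int run(List<Integer> segments) throws Exception {
        ExecutorService workerPool = Executors.newFixedThreadPool(2); // per-segment operators
        ExecutorService runnerPool = Executors.newFixedThreadPool(1); // merge / plan submission
        BlockingQueue<Integer> perSegment = new LinkedBlockingQueue<>();

        for (int s : segments) {
            // Stand-in for per-segment execution work.
            workerPool.submit(() -> { perSegment.put(s * s); return null; });
        }

        // The runner blocks polling worker results, but cannot deadlock the
        // workers because they are scheduled on a separate pool.
        Future<Integer> merged = runnerPool.submit(() -> {
            int sum = 0;
            for (int i = 0; i < segments.size(); i++) {
                sum += perSegment.take();
            }
            return sum;
        });

        int result = merged.get(5, TimeUnit.SECONDS);
        workerPool.shutdown();
        runnerPool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(Arrays.asList(1, 2, 3))); // 1 + 4 + 9 = 14
    }
}
```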
   
   > Also, for long running queries the runner threads may spend a lot of time 
blocked on worker threads. That may also need to be revisited.
   
   - for the intermediate stages this should not be true: the runner thread 
exits once the OpChain is registered with the scheduler
   - for the leaf stage, unfortunately not all combine operators run on a poll 
basis, so the issue you mentioned could happen (but probably only for the 
current concurrent-indexed-table group-by combine; the rest poll segment 
results from a queue and will not block or busy-wait)
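   A poll-based merge loop of the kind mentioned above could look roughly like this (a hedged sketch with invented names, not the actual `BaseCombineOperator` code): instead of blocking indefinitely on `take()`, the merge thread polls with a short timeout so it can check a deadline (or cancellation) between results rather than holding a runner thread hostage.

```java
import java.util.concurrent.*;

public class PollBasedMerge {
    // Sum `expected` results from the queue, giving up after `deadlineMs`.
    public static int merge(BlockingQueue<Integer> results, int expected, long deadlineMs)
            throws Exception {
        int sum = 0;
        int seen = 0;
        long deadline = System.currentTimeMillis() + deadlineMs;
        while (seen < expected && System.currentTimeMillis() < deadline) {
            // Bounded wait: no indefinite block, no busy-spin.
            Integer r = results.poll(10, TimeUnit.MILLISECONDS);
            if (r != null) {
                sum += r;
                seen++;
            }
        }
        if (seen < expected) {
            throw new TimeoutException("merge timed out after " + deadlineMs + "ms");
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> q = new LinkedBlockingQueue<>();
        q.put(1);
        q.put(2);
        q.put(3);
        System.out.println(merge(q, 3, 1000)); // 1 + 2 + 3 = 6
    }
}
```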
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
