DaisyModi opened a new pull request, #4189:
URL: https://github.com/apache/gobblin/pull/4189

   ## Dear Gobblin maintainers,
   
   I am submitting the following PR for your kind review.
   
   ## JIRA
   - [GOBBLIN-XXXX](https://issues.apache.org/jira/browse/GOBBLIN-XXXX) (ticket 
pending)
   
   ## Description
   
   Increases two thread-pool defaults from 3 to 150 in `ServiceConfigKeys`:
   
   - `DEFAULT_NUM_SPEC_CATALOG_LISTENER_THREADS` — controls the executor used 
by `FlowCatalog`'s `SpecCatalogListenersList` for parallel flow compilation on 
the **submission path** (`POST /flowconfigs`).
   - `DEFAULT_NUM_DAG_PROC_THREADS` — controls the executor used by 
`DagProcessingEngine` on the **execution path**.
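The change can be pictured as a sketch of the two constants in `ServiceConfigKeys` (field ordering and modifiers here are illustrative, not a verbatim diff; the constant names and values come from this PR's description):

```java
// Illustrative sketch only -- not the actual Gobblin source file.
public class ServiceConfigKeys {
    // Executor behind FlowCatalog's SpecCatalogListenersList (submission path)
    public static final int DEFAULT_NUM_SPEC_CATALOG_LISTENER_THREADS = 150; // was 3

    // Executor behind DagProcessingEngine (execution path)
    public static final int DEFAULT_NUM_DAG_PROC_THREADS = 150; // was 3
}
```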
   
   ### Motivation
   
   Both pools ultimately invoke `MultiHopFlowCompiler.compileFlow`, which calls 
`DataMovementAuthorizer.isMovementAuthorized` 
([`MultiHopFlowCompiler.java:242`](https://github.com/apache/gobblin/blob/master/gobblin-service/src/main/java/org/apache/gobblin/service/modules/flow/MultiHopFlowCompiler.java#L242)).
Under high authorization-service latency, the default pool size of 3 becomes the binding throughput limit on both paths, so flow submissions and executions queue up even when downstream resources have ample capacity.
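   A back-of-envelope model makes the bottleneck concrete (this is an illustrative calculation, not Gobblin code; the 500 ms authorizer latency is a hypothetical figure): with `N` pool threads each blocking roughly `L` seconds in `isMovementAuthorized`, sustained throughput is capped at `N / L` flows per second regardless of downstream capacity.

   ```java
   // Little's-law-style bound for a fixed pool whose tasks block on a slow call.
   public class PoolThroughput {
       // Max sustained flows/second for a pool of `poolSize` threads when each
       // compileFlow call blocks ~authLatencySeconds on the authorizer.
       static double maxFlowsPerSecond(int poolSize, double authLatencySeconds) {
           return poolSize / authLatencySeconds;
       }

       public static void main(String[] args) {
           double latency = 0.5; // hypothetical 500 ms authorization latency
           System.out.println(maxFlowsPerSecond(3, latency));   // old default: 6.0
           System.out.println(maxFlowsPerSecond(150, latency)); // new default: 300.0
       }
   }
   ```

   Under that assumption the old default caps both paths at 6 flows/second, while 150 threads raises the ceiling to 300, pushing the limit back onto downstream resources.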
   
   The pool sizes remain configurable via:
   - `gobblin.service.specCatalogListener.numThreads`
   - `gobblin.service.dagProcessingEngine.numThreads`
   
   Deployments preferring the previous behavior can override either back to 3.
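   For example, a deployment's service configuration could pin both pools to the old size (the keys are the ones listed above; the `.properties` form shown here is just one way such overrides are typically expressed):

   ```properties
   # Restore the pre-PR pool sizes
   gobblin.service.specCatalogListener.numThreads=3
   gobblin.service.dagProcessingEngine.numThreads=3
   ```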
   
   ### Tests
   
   Existing tests reference these constants symbolically (e.g. 
`DagProcessingEngineTest.java` uses 
`ServiceConfigKeys.DEFAULT_NUM_DAG_PROC_THREADS` directly), so they auto-adjust 
to the new default and require no changes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
