[ 
https://issues.apache.org/jira/browse/IMPALA-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831834#comment-16831834
 ] 

Tim Armstrong commented on IMPALA-7486:
---------------------------------------

I discussed this offline with [~bikramjeet.vig] and we think this is a 
somewhat complex issue, because query execution *can* be resource-intensive on 
the coordinator. It would be good to do this work incrementally rather than 
deferring it until we've done all the work to make the coordinator's work 
lightweight. I think we can do it like this:

* Identify a subset of queries that we can determine are not resource-intensive 
on the coordinator, and reserve less memory for them for admission control 
purposes. Currently this might just be queries without runtime filters and with 
lightweight coordinator fragments.
* Expand that subset of queries by doing things like IMPALA-3825 and 
IMPALA-8483.
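
The first step above could be sketched roughly as follows. This is only an 
illustration of the idea, not actual Impala code: the operator names, the 
QueryPlanSummary struct, and the min_coord_reservation parameter are all 
hypothetical stand-ins for whatever the planner and admission controller 
actually expose.

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical summary of the coordinator-side plan for one query.
struct QueryPlanSummary {
  bool has_runtime_filters;                  // filters aggregate on the coordinator
  std::vector<std::string> coord_operators;  // operators in the coordinator fragment
};

// Operators assumed cheap on the coordinator: exchange, root sink, union.
static bool IsLightweightOp(const std::string& op) {
  return op == "EXCHANGE" || op == "PLAN_ROOT_SINK" || op == "UNION";
}

// A query qualifies for reduced coordinator admission only if it has no
// runtime filters and every coordinator-fragment operator is lightweight.
bool IsCoordinatorLightweight(const QueryPlanSummary& q) {
  if (q.has_runtime_filters) return false;
  return std::all_of(q.coord_operators.begin(), q.coord_operators.end(),
                     IsLightweightOp);
}

// Memory to admit on the coordinator: the full per-backend estimate for
// heavyweight queries, a small fixed floor for lightweight ones.
int64_t CoordAdmissionMem(const QueryPlanSummary& q, int64_t per_backend_mem,
                          int64_t min_coord_reservation) {
  return IsCoordinatorLightweight(q) ? min_coord_reservation : per_backend_mem;
}
```

With something like this in place, the second bullet (IMPALA-3825, IMPALA-8483) 
amounts to widening the set of queries for which IsCoordinatorLightweight() 
returns true.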

> Admit less memory on dedicated coordinator for admission control purposes
> -------------------------------------------------------------------------
>
>                 Key: IMPALA-7486
>                 URL: https://issues.apache.org/jira/browse/IMPALA-7486
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Backend
>            Reporter: Tim Armstrong
>            Assignee: Bikramjeet Vig
>            Priority: Major
>
> Following on from IMPALA-7349, we should consider handling dedicated 
> coordinators specially rather than admitting a uniform amount of memory on 
> all backends.
> The specific scenario I'm interested in targeting is the case where we have 
> a coordinator that is executing many "lightweight" coordinator fragments, 
> e.g. just an ExchangeNode and PlanRootSink, plus maybe other lightweight operators 
> like UnionNode that don't use much memory or CPU. With the current behaviour 
> it's possible for a coordinator to reach capacity from the point-of-view of 
> admission control when at runtime it is actually very lightly loaded.
> This is particularly true if coordinators and executors have different 
> process mem limits. This will be somewhat common since they're often deployed 
> on different hardware or the coordinator will have more memory dedicated to 
> its embedded JVM for the catalog cache.
> More generally we could admit different amounts per backend depending on how 
> many fragments are running, but I think this incremental step would address 
> the most important cases and be a little easier to understand.
> We may want to defer this work until we've implemented distributed runtime 
> filter aggregation, which will significantly reduce coordinator memory 
> pressure, and until we've improved distributed overadmission (since the 
> coordinator behaviour may help throttle overadmission).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
