[
https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16620818#comment-16620818
]
Randy Fradin commented on SOLR-6930:
------------------------------------
Thanks [~erickerickson], we do use that and it is useful, but time spent
processing a query is only a rough proxy for its memory cost. We are also
missing the blocking feature, which timeAllowed sort of provides on the time
dimension, except that it is opt-in from the client rather than enforceable on
all queries by the server, cannot be used with cursorMark, and cannot be relied
on to kill a query when the time expires regardless of where the query is in
its execution.
We have problems both with poorly thought-out queries leading to OOM errors and
with queries that do not quite cause an OOM but allocate memory faster than the
garbage collector can keep up with, leading to full GC pauses of up to tens of
seconds, sometimes long enough to cause the ZooKeeper session to expire and put
all cores into the down status and the subsequent recovery process.
Long way of saying, a memory circuit breaker would be very useful :)
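As a rough illustration of the idea, a heap-based circuit breaker could check JVM memory pressure before a query runs and reject the query instead of risking an OOM. This is a hypothetical sketch, not Solr's actual implementation: the class and method names are invented, and a real breaker would hook into Solr's query execution path and use a better cost estimate than current heap usage.

```java
// Hypothetical sketch of a heap-usage circuit breaker; not Solr's actual API.
public class MemoryCircuitBreaker {
    private final double thresholdFraction;

    public MemoryCircuitBreaker(double thresholdFraction) {
        this.thresholdFraction = thresholdFraction;
    }

    /** Returns true if current heap usage exceeds the configured fraction of max heap. */
    public boolean shouldTrip() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory() > thresholdFraction;
    }

    /** Called before executing a query; fails fast instead of risking an OOM. */
    public void checkBeforeQuery(String queryDescription) {
        if (shouldTrip()) {
            throw new IllegalStateException(
                "Circuit breaker tripped, rejecting query: " + queryDescription);
        }
    }
}
```

The server-side check is the key difference from timeAllowed: it applies to every query regardless of what the client sends.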
> Provide "Circuit Breakers" For Expensive Solr Queries
> -----------------------------------------------------
>
> Key: SOLR-6930
> URL: https://issues.apache.org/jira/browse/SOLR-6930
> Project: Solr
> Issue Type: Improvement
> Components: search
> Reporter: Mike Drob
> Priority: Major
>
> Ref:
> http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
> ES currently allows operators to configure "circuit breakers" that
> preemptively fail queries estimated to be too large, rather than allowing an
> OutOfMemoryError to happen. We might be able to do the same thing.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)