One of my users requested it; they are less aware of what's allowed, and I
don't want to block them a priori for a long, specific request (there are
other parameters that might end up OOMing me).

I thought of the timeAllowed restriction, but even that cannot guarantee
the JVM heap won't flood within that window (for example, if everything is
already cached and my RAM I/O is very fast).
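
For reference, this is the kind of cap I mean (just a sketch; the query,
core name and the 5000 ms value are placeholders, not our actual setup):

    http://localhost:8983/solr/collection1/select?q=*:*&rows=100&timeAllowed=5000

As far as I know, timeAllowed only bounds the search phase itself, not the
loading and writing of stored fields for a huge result set, which is
exactly the part that floods the heap.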


On Mon, Jun 17, 2013 at 11:47 PM, Walter Underwood <wun...@wunderwood.org> wrote:

> Don't request 100K docs in a single query. Fetch them in smaller batches.
>
> wunder
>
> On Jun 17, 2013, at 1:44 PM, Manuel Le Normand wrote:
>
> > Hello again,
> >
> > After a heavy query on my index (returning 100K docs in a single
> > query), my JVM heap floods and I get a Java OOM exception, after which
> > the GC cannot collect anything ("GC overhead limit exceeded"), as
> > these memory chunks are not disposable.
> >
> > I want to allow queries like this; my concern is that such a case
> > provokes a total Solr crash, returning a 503 Internal Server Error
> > while trying to *index*.
> >
> > Is there any way to separate these two logics? I'm fine with Solr not
> > being able to return any response after this OOM, but I don't see the
> > justification for the query flooding the JVM's internal (bounded)
> > write buffers.
> >
> > Thanks,
> > Manuel
>
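
For anyone following: a rough sketch of the batched fetching Walter
suggests above, using SolrJ 4.x start/rows paging (the URL, query, sort
field and batch size are placeholders, not our actual setup):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class BatchedFetch {
        public static void main(String[] args) throws SolrServerException {
            HttpSolrServer server =
                    new HttpSolrServer("http://localhost:8983/solr/collection1");
            int batchSize = 1000;                      // keep each response small
            SolrQuery query = new SolrQuery("*:*");    // placeholder query
            query.setSort("id", SolrQuery.ORDER.asc);  // stable sort so pages don't shift
            query.setRows(batchSize);

            long fetched = 0;
            long numFound = Long.MAX_VALUE;            // forces the first request
            while (fetched < numFound) {
                query.setStart((int) fetched);
                QueryResponse rsp = server.query(query);
                numFound = rsp.getResults().getNumFound();
                if (rsp.getResults().isEmpty()) {
                    break;                             // guard against a shrinking result set
                }
                for (SolrDocument doc : rsp.getResults()) {
                    // process one document; only one batch is resident at a time
                }
                fetched += rsp.getResults().size();
            }
        }
    }

One caveat with plain start/rows: deep pages get progressively more
expensive on the server side, so this trades one huge allocation for many
smaller requests rather than eliminating the cost entirely.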
