I think you can modify the response writer and stream results instead of
building them first and then sending them in one go. I am using this technique
to dump millions of docs in JSON format - but in your case you may have to
figure out how to process the docs during streaming if you don't want to save
the data to disk first.
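
Roughly, the client-side counterpart of that is SolrJ's
queryAndStreamResponse, which hands you each document as it is parsed off the
wire instead of building the full list in client memory. An untested sketch -
the core name, URL and rows value are just examples:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.StreamingResponseCallback;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrDocument;

public class StreamingDump {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server =
            new HttpSolrServer("http://localhost:8983/solr/collection1");
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(1000000); // still one request; page with start/rows for bigger dumps

        server.queryAndStreamResponse(q, new StreamingResponseCallback() {
            @Override
            public void streamDocListInfo(long numFound, long start, Float maxScore) {
                System.err.println("numFound=" + numFound);
            }

            @Override
            public void streamSolrDocument(SolrDocument doc) {
                // write each doc out as it arrives instead of accumulating a list
                System.out.println(doc);
            }
        });
        server.shutdown();
    }
}

This only keeps the client lean, of course - the server-side response writer
still has to iterate the hit list, so limiting rows or paging is still worth
doing on top of it.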

Roman
On 17 Jun 2013 20:02, "Mark Miller" <markrmil...@gmail.com> wrote:

> There is a Java command-line arg that lets you run a command on OOM - I'd
> configure it to log and kill -9 Solr. Then use runit or something to
> supervise Solr - so that if it's killed, it just restarts.
>
> I think that is the best way to deal with OOMs. Other than that, you have
> to write a middle layer and put limits on user requests before passing them
> on to Solr.
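
The flag Mark is referring to is presumably HotSpot's -XX:OnOutOfMemoryError
hook. Under runit that could look something like the run script below (paths
and heap size are examples, untested):

#!/bin/sh
# /etc/sv/solr/run -- runsv supervises whatever this script execs and
# restarts it whenever it dies
exec 2>&1
cd /opt/solr/example
exec java -Xmx4g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:OnOutOfMemoryError="kill -9 %p" \
     -jar start.jar

The %p is expanded by the JVM to its own pid, so on OOM the process kills
itself hard and runsv immediately starts a fresh one; the heap dump (plus
whatever svlogd captures from stdout/stderr) serves as the log of the event.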
>
> - Mark
>
> On Jun 17, 2013, at 4:44 PM, Manuel Le Normand <manuel.lenorm...@gmail.com>
> wrote:
>
> > Hello again,
> >
> > After a heavy query on my index (returning 100K docs in a single query),
> > my JVM heap floods and I get a Java OOM exception, after which the GC
> > cannot collect anything (GC overhead limit exceeded) because these memory
> > chunks are not disposable.
> >
> > I want to be able to serve queries like this; my concern is that such a
> > case provokes a total Solr crash, returning a 503 Internal Server Error
> > while trying to *index*.
> >
> > Is there any way to separate these two concerns? I'm fine with Solr not
> > being able to return any response after this OOM, but I don't see why the
> > query should be allowed to flood the JVM's internal (bounded) buffers
> > used for writes.
> >
> > Thanks,
> > Manuel
>
>
