You need to tune the garbage collection and heap settings on your JVM to handle the OOM.
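
For example, something along these lines (the heap sizes and collector here are illustrative only and assume the stock Jetty start.jar setup; tune the numbers for your own data):

  java -Xms2g -Xmx4g \
       -XX:+UseConcMarkSweepGC \
       -XX:+HeapDumpOnOutOfMemoryError \
       -jar start.jar

-XX:+HeapDumpOnOutOfMemoryError will at least show you what is filling the heap when the OOM hits.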

Sent from my iPhone

On May 10, 2012, at 21:06, "Jasper Floor" <jasper.fl...@m4n.nl> wrote:

> Hi all,
> 
> we've been running Solr 1.4 for about a year with no real problems. As
> of Monday it became impossible to do a full import on our master
> because of an OOM. What I find strange is that even after we more than
> doubled the available memory, we would still always hit an OOM. We
> seem to have reached a magic number of documents beyond which Solr
> requires infinite memory (or at least more than 2.5x what it
> previously needed, which might as well be infinite unless we invest in
> more resources).
> 
> We have solved the immediate problem by setting autocommit=false,
> holdability="CLOSE_CURSORS_AT_COMMIT", and batchSize=10000. I don't
> think holdability does very much in this case, as I believe
> CLOSE_CURSORS_AT_COMMIT is the default behavior. batchSize certainly
> has a direct effect on performance (roughly a 3x time difference
> between a batch size of 1 and 10000). The autocommit setting is a
> problem for us, however: it leaves transactions active in the db,
> which may block other processes.
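> 
> For reference, these settings are attributes on the DataImportHandler
> dataSource element; roughly like this (driver, url and credentials
> elided, and attribute spellings as above, so treat it as a sketch of
> our config rather than the exact file):
> 
>   <dataSource type="JdbcDataSource"
>               driver="..."
>               url="..."
>               autocommit="false"
>               holdability="CLOSE_CURSORS_AT_COMMIT"
>               batchSize="10000"/>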
> 
> We have about 5.1 million documents in the index, which is about 2.2 gigabytes.
> 
> A full index is a rare operation for us, but when we need it, we also
> need it to work (thank you, Captain Obvious).
> 
> With the settings above, a full index takes 15 minutes. We anticipate
> handling at least 10x the amount of data in the future. I actually
> hope to have Solr 4 by then, but I can't sell a product here that
> isn't finalized yet.
> 
> 
> Thanks for any insight you can give.
> 
> Kind regards,
> Jasper
