I really got no OOM.
You said you are doing sorts, right? ... most likely the FieldCache is
being used. It maintains a WeakHashMap to large arrays keyed off of
your IndexReader, so if you are constantly opening new readers/searchers
then those old refs will stay around in the FieldCache until the old
readers are garbage collected.
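The weak-keyed caching described above can be illustrated with a plain JDK WeakHashMap, no Lucene required. This is a sketch, not FieldCache itself; the `Reader` class is a hypothetical stand-in for an IndexReader:

```java
import java.util.WeakHashMap;

public class WeakCacheDemo {
    static class Reader {}  // hypothetical stand-in for an IndexReader

    // Puts one entry into a weak-keyed cache and reports its size while the
    // key is still strongly referenced: the entry cannot be evicted yet.
    static int cacheSizeWhileLive() {
        WeakHashMap<Reader, int[]> cache = new WeakHashMap<Reader, int[]>();
        Reader reader = new Reader();       // strong reference held here
        cache.put(reader, new int[1024]);   // "FieldCache"-style entry
        return cache.size();                // always 1 while reader is live
    }

    public static void main(String[] args) {
        System.out.println("entries while reader is live: " + cacheSizeWhileLive());
        // Once the last strong reference to the reader is dropped, the JVM is
        // free to collect it, and the WeakHashMap entry vanishes with it.
        // Holding on to old readers is exactly what keeps the cached arrays alive.
    }
}
```

The point: weak keys only help once nothing else references the old readers, so opening many readers without closing/releasing them keeps their cache arrays reachable.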
Exactly, Java will only "free" memory when it needs to. And if you
set a maximum heap, under most circumstances of "heavy load" you will
reach the max before Java attempts to free anything. This is done for
performance reasons.
There are options for the garbage collector that control how often and
how aggressively it runs.
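You can watch this "grow toward -Xmx before collecting" behavior directly with the standard Runtime API:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max   = rt.maxMemory();    // the -Xmx ceiling (or the default)
        long total = rt.totalMemory();  // what the JVM has claimed so far
        long free  = rt.freeMemory();   // unused portion of the claimed heap

        // Under load the JVM tends to grow `total` toward `max` before it
        // works hard at collecting, so "used" climbing toward -Xmx is normal
        // and is not by itself evidence of a leak.
        System.out.println("used=" + (total - free)
                + " total=" + total + " max=" + max);
    }
}
```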
I really got no OOM.
So, the impression I have is that there is some kind of static cache that
uses the free memory for the IndexSearcher. If I set -Xmx16m the application
uses the entire 16m; if I set -Xmx512m, after some time the application uses
the entire 512m, so that even if I instantiate a new IndexSearcher the
memory is not released.
Problem is, there is no way to force a gc. Runtime.gc() only requests
that a gc be performed - if the CPU is stressed you will not get a GC.
On Jul 3, 2006, at 12:50 PM, Chuck Williams wrote:
I'd suggest forcing gc after each n iteration(s) of your loop to
eliminate the garbage factor. Also, you can run a profiler to see which
objects are leaking (e.g., the netbeans profiler is excellent). Those
steps should identify any issues quickly.
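The suggestion above can be sketched as a loop that requests a gc every n iterations and logs heap usage; the iteration counts and allocation here are arbitrary stand-ins for one search cycle:

```java
public class GcProbe {
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        final int GC_EVERY = 100;  // arbitrary choice for this sketch
        for (int i = 1; i <= 1000; i++) {
            byte[] garbage = new byte[16 * 1024];  // stands in for one search cycle

            if (i % GC_EVERY == 0) {
                System.gc();  // a request only - the JVM may ignore it
                // If usage keeps rising across many gc requests, suspect a
                // real reference leak rather than garbage awaiting collection.
                System.out.println("iter " + i + ": used=" + usedBytes());
            }
        }
    }
}
```

A profiler gives the definitive answer, but a flat "used" line across gc requests is a quick way to rule the leak out.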
Chuck
robert engels wrote on 07/03/2006 07:40
Did you try what was suggested? (-Xmx16m) and did you get an OOM? If
not, there is no memory leak.
On Jul 3, 2006, at 12:33 PM, Bruno Vieira wrote:
Thanks for the answer, but I have isolated the cycle inside a loop in a
static void main (String args[]) class to test this issue. In this case there
were no classes referencing the IndexSearcher and the problem still happened.
2006/7/3, robert engels <[EMAIL PROTECTED]>:
You may not have a memory leak at all. It could just be garbage
waiting to be collected. I am fairly certain there are no "memory
leaks" in the current Lucene code base (outside of the ThreadLocal
issue).
A simple way to verify this would be to add -Xmx16m on the command
line. If there were a real leak you would quickly get an OutOfMemoryError;
if the program runs fine under that cap, it was just uncollected garbage.
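A tiny check like this confirms the cap actually took effect before you draw conclusions from the run:

```java
// Run with: java -Xmx16m MaxHeapCheck
public class MaxHeapCheck {
    public static void main(String[] args) {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        // With -Xmx16m this prints a number near 16. If your test loop then
        // completes under that cap without an OutOfMemoryError, the growth
        // you saw was garbage awaiting collection, not a leak.
        System.out.println("max heap ~" + maxMb + "mb");
    }
}
```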
Hi everyone,
I am working on a project with around 35000 documents (8 text fields with
256 chars at most for each field) on lucene. But unfortunately this index is
updated constantly, and I need these new items to appear in my search
results as fast as possible.
I have an IndexSearcher,
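For the freshness-versus-memory trade-off above, a common pattern is to keep one IndexSearcher and reopen it only when the index has actually changed. This is an uncompiled sketch against the Lucene 2.x-era API; the index path is a placeholder:

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;

public class SearcherHolder {
    private final String indexPath;  // e.g. "/path/to/index" (placeholder)
    private IndexSearcher searcher;
    private long version;

    public SearcherHolder(String indexPath) throws java.io.IOException {
        this.indexPath = indexPath;
        this.searcher = new IndexSearcher(indexPath);
        this.version = searcher.getIndexReader().getVersion();
    }

    // Returns a searcher that sees the latest commits, replacing the old one
    // only when the on-disk index version has actually changed.
    public synchronized IndexSearcher getSearcher() throws java.io.IOException {
        long current = IndexReader.getCurrentVersion(indexPath);
        if (current != version) {
            searcher.close();  // lets old FieldCache entries be collected
            searcher = new IndexSearcher(indexPath);
            version = current;
        }
        return searcher;
    }
}
```

Closing the old searcher before replacing it is what releases the old reader's FieldCache arrays; opening a fresh searcher per query does the opposite.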