Hi,
I'm working with Lucene 2.4.0 and the JVM (JDK 1.6.0_07). I'm
consistently receiving "OutOfMemoryError: Java heap space" when trying
to index large text files.
Example 1: Indexing a 5 MB text file runs out of memory with a 16 MB
max. heap size. So I increased the max. heap size to 51
This issue is still open. Any suggestions/help with this would be
greatly appreciated.
Thanks,
Paul
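One quick sanity check when a raised -Xmx still produces OOMs is to confirm the flag actually reached the JVM (launcher scripts and wrappers sometimes drop it). A minimal, Lucene-independent check (class name is illustrative):

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Reports the heap ceiling the JVM is actually running with,
        // e.g. to confirm that a -Xmx512m flag was really picked up.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap (MB): " + maxMb);
    }
}
```

Run it with the same launch command used for the indexer; if the printed value still reflects the default heap, the flag is being lost before the JVM starts.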
-----Original Message-----
From: java-user-return-42080-paul_murdoch=emainc@lucene.apache.org On Behalf Of
" the
indexing of large files to control memory usage.
Thanks,
Paul
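A workaround often suggested for this kind of OOM is to split the big file into smaller chunks and index each chunk as its own document; the splitting step needs nothing beyond the JDK. A minimal sketch, where the chunk size and class name are illustrative choices, not anything from the thread:

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    /** Splits text into pieces of at most maxChars characters each. */
    public static List<String> split(String text, int maxChars) {
        List<String> chunks = new ArrayList<String>();
        for (int start = 0; start < text.length(); start += maxChars) {
            // The last chunk may be shorter than maxChars.
            int end = Math.min(text.length(), start + maxChars);
            chunks.add(text.substring(start, end));
        }
        return chunks;
    }
}
```

Each chunk can then be added as a separate document whose size the indexer's RAM buffer can comfortably absorb.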
-----Original Message-----
From: java-user-return-42271-paul_murdoch=emainc@lucene.apache.org On Behalf Of Dan OConnor
Sent: Friday, September 11, 2009
shouldn't matter. At least the file will be
indexed correctly.
Thanks,
Paul
-----Original Message-----
From: java-user-return-42272-paul_murdoch=emainc@lucene.apache.org On Behalf Of Glen Newton
Sent: Friday, September 11, 2009
-----Original Message-----
From: java-user-return-42277-paul_murdoch=emainc@lucene.apache.org On Behalf Of Glen Newton
Sent: Friday, September 11, 2009 10:44 AM
To: java-user@lucene.apache.org
Subject: Re: Indexing large files? - No answers yet...
Paul,
by id and merge them into one large document after they are in
the index? That was my plan to work around OOM and achieve the same end result
as trying to index the large document in one shot.
Paul
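The merge-by-id step described above can be sketched without any Lucene calls: assuming each chunk was stored with its parent id and a sequence number (an assumption, not something the thread specifies), reassembly is just an ordered concatenation. Class and parameter names here are hypothetical:

```java
import java.util.Map;
import java.util.TreeMap;

public class ChunkMerger {
    /**
     * Reassembles chunk texts into one document, ordered by the
     * sequence number each chunk was stored with at index time.
     */
    public static String merge(Map<Integer, String> chunksBySeq) {
        StringBuilder sb = new StringBuilder();
        // TreeMap iterates keys in ascending order, restoring chunk order
        // even if the chunks were retrieved from the index out of order.
        for (String chunk : new TreeMap<Integer, String>(chunksBySeq).values()) {
            sb.append(chunk);
        }
        return sb.toString();
    }
}
```

The trade-off versus single-shot indexing is that phrase or span matches crossing a chunk boundary are lost unless the chunks overlap.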
-----Original Message-----
From: java-user-return-42283-paul_murdoch=emainc@luce
Hi all,
Since you can't (and it doesn't make sense to) use wildcards in phrase
queries, how do you construct a query to get results for phrases that
begin with a certain set of terms? Here are some theoretical
examples...
Example 1 - I have an index where each document contains the content
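The matching semantics being asked for - a phrase whose leading tokens equal a given term sequence - can be sketched independently of the Lucene query classes (in the 2.x API, MultiPhraseQuery, which accepts several terms at one position, is a common starting point for this kind of query). A plain-Java sketch of the semantics, with hypothetical names, assuming whitespace tokenization and lowercase prefix terms:

```java
public class PhrasePrefix {
    /**
     * True if the phrase's leading tokens equal the given prefix tokens.
     * Assumes simple whitespace tokenization and lowercase prefix terms.
     */
    public static boolean startsWith(String phrase, String... prefixTokens) {
        String[] tokens = phrase.toLowerCase().split("\\s+");
        if (tokens.length < prefixTokens.length) {
            return false;
        }
        for (int i = 0; i < prefixTokens.length; i++) {
            if (!tokens[i].equals(prefixTokens[i])) {
                return false;
            }
        }
        return true;
    }
}
```

A real Lucene query would of course run the same tokenization through the index's analyzer rather than a bare split.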