Well, I use neither Eclipse nor your application server and can offer
no advice on any differences in behaviour between the two. Maybe you
should try Eclipse or app server forums.
If you are going to index the complete contents of a file as one field,
you are likely to hit OOM exceptions. How big are the files you are
indexing?
Hello,
I get exception only when the code is fired from Eclipse.
When it is deployed on an application server, I get no exception at all.
This forced me to invoke the same code from Eclipse and check what the
issue is.
I ran the code on a server with 8 GB memory. Even then, no exception.
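One detail worth checking (an assumption on my part, not stated in the thread): an OOM that appears only under Eclipse often just means the Eclipse run configuration uses the default JVM heap, while the application server is launched with a much larger -Xmx. A quick way to see what heap each environment actually grants is to print the JVM's maximum heap from the code itself:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will attempt to use (the effective -Xmx limit).
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```

Running this from the Eclipse launch and from the server deployment and comparing the two numbers would confirm or rule out a heap-size difference.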
So you do get an exception after all, OOM.
Try it without this line:
doc.add(new TextField("contents", new BufferedReader(new
InputStreamReader(fis, "UTF-8"))));
I think that will slurp the whole file in one go which will obviously
need more memory on larger files than on smaller ones.
Or just
Yes, I know that Lucene should not have any document size limits. All I
get is a lock file inside my index folder; apart from that, the index
folder is empty. Then I get the OOM exception.
Please provide some guidance...
Here is the example:
package com.issue;
import org.apache
Lucene doesn't have document size limits.
There are default limits for how many tokens the highlighters will process ...
But, if you are passing each line as a separate document to Lucene,
then Lucene only sees a bunch of tiny documents, right?
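The point above can be sketched without Lucene at all (the helper below is hypothetical, just modeling "one document per line"): splitting a file's contents on line breaks produces many small units, so from the indexer's perspective there is never one huge document.

```java
import java.util.Arrays;
import java.util.List;

public class PerLineDocs {
    // Model of the per-line scheme described in the thread: each line of
    // the file becomes its own tiny "document".
    static List<String> toLineDocs(String fileContents) {
        return Arrays.asList(fileContents.split("\n"));
    }

    public static void main(String[] args) {
        String contents = "first line\nsecond line\nthird line";
        List<String> docs = toLineDocs(contents);
        // Three small documents, not one large one.
        System.out.println(docs.size() + " tiny documents");
    }
}
```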
Can you boil this down to a small test showing the problem?
Any help would be highly appreciated. I am kind of stuck and
unable to find a possible solution.
On 8/29/2013 11:21 AM, Ankit Murarka wrote:
Hello all,
Faced with a typical issue.
I have many files which I am indexing.
Problem Faced:
a. Files having size less than 20 MB are successfully indexed.