To: java-user@lucene.apache.org
Subject: RE: Read large size index
Thanks Uwe,
Can you please give me a code snippet so that I can resolve my issue?
The correct way to iterate over all results is to use a custom HitCollector
(Collector in 2.9) instance. The HitCollector's method collect(docid, score)
is called for every hit. No need to ask the searcher for Integer.MAX_VALUE
results up front.
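A minimal sketch of that approach, assuming a pre-2.9 Lucene with the HitCollector API (the index path and field name here are placeholders, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.HitCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;

public class CollectAllHits {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("/tmp/testindex/");
        final List<Integer> docIds = new ArrayList<Integer>();

        // collect(doc, score) is called once per matching document, so all
        // hits can be visited without ever allocating a huge TopDocs.
        searcher.search(new TermQuery(new Term("contents", "pdf")),
                new HitCollector() {
                    public void collect(int doc, float score) {
                        docIds.add(doc);
                    }
                });

        System.out.println("total hits: " + docIds.size());
        searcher.close();
    }
}
```

In 2.9 the same idea is spelled `Collector`, with `setNextReader`/`collect(int doc)` instead of a single `collect(doc, score)` method.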
Sent: Tuesday, June 30, 2009 2:31 PM
To: java-user@lucene.apache.org
Subject: Re: Read large size index
Thanks Simon,
It's working now, thanks a lot. I have a doubt:
I've got 30,000 PDF files indexed, but if I use the code which you
sent, it returns only 200 results, because I am setting TopDocs topDocs =
searcher.search(query, 200); as I said, if I use Integer.MAX_VALUE, it throws
the heap space error.
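If only one page of results is rendered at a time, another option is to keep searcher.search(query, n) but only ever ask for the hits up to the end of the current page, never for Integer.MAX_VALUE. A sketch (the class, method names, and page size are mine, not from the thread); note that TopDocs.totalHits still reports the full match count even when only a few ScoreDocs are returned:

```java
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

public class PagedSearch {
    /** Start (inclusive) and end (exclusive) hit offsets for one page. */
    static int[] pageBounds(int available, int pageNo, int pageSize) {
        int from = Math.min(available, pageNo * pageSize);
        int to = Math.min(available, from + pageSize);
        return new int[] { from, to };
    }

    /** Fetch one page of hits; only (pageNo+1)*pageSize hits are requested. */
    static ScoreDoc[] page(IndexSearcher searcher, Query query,
                           int pageNo, int pageSize) throws Exception {
        TopDocs topDocs = searcher.search(query, (pageNo + 1) * pageSize);
        int[] b = pageBounds(topDocs.scoreDocs.length, pageNo, pageSize);
        ScoreDoc[] result = new ScoreDoc[b[1] - b[0]];
        System.arraycopy(topDocs.scoreDocs, b[0], result, 0, result.length);
        return result;
    }
}
```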
Hey there,
On Tue, Jun 30, 2009 at 10:41 AM, wrote:
> Thanks Simon
> this is my code, but I am getting null:
>
> IndexReader open = IndexReader.open(indexDir);
> IndexSearcher searcher = new IndexSearcher(open);
> final String fName = "contents";
Thanks Simon,
Example:
IndexReader open = IndexReader.open("/tmp/testindex/");
IndexSearcher searcher = new IndexSearcher(open);
final String fName = "test";
Is fName a field like summary or contents?
TopDocs topDocs = searcher.search(new TermQuery(new Term(fName,
"lucene")),
On Mon, Jun 29, 2009 at 6:36 PM, m.harig wrote:
Thanks Simon,
Hey there, that makes things easier. :)
ok here are some questions:
>>> Do you iterate over all docs calling hits.doc(i)? If so, do you have to
load all fields to render your results? If not, you should not retrieve
all of them.
Yes, I am iterating over all docs by calling hits.doc(i).
Hey there, that makes things easier. :)
ok, here are some questions:
Do you iterate over all docs calling hits.doc(i)? If so, do you have to
load all fields to render your results? If not, you should not retrieve
all of them.
You use IndexSearcher.search(Query q, ...) which returns a Hits object; have
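The point about not retrieving every document can be sketched with the pre-2.9 Hits API: fetch stored fields only for the hits actually rendered. The method name, field name, and page size below are my own placeholders:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class RenderFirstPage {
    static void render(IndexSearcher searcher, Query query) throws Exception {
        Hits hits = searcher.search(query);
        int pageSize = 10;
        // hits.doc(i) loads the stored fields for one document; calling it
        // only for the first page avoids materializing every matching doc.
        for (int i = 0; i < Math.min(pageSize, hits.length()); i++) {
            Document d = hits.doc(i);
            System.out.println(d.get("title"));
        }
    }
}
```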
Thanks again,
Did I index my files correctly? Please, I need some tips. The following
is the error when I run my keyword search; I typed pdf, that's it, because I've
got around 30,000 files named pdf:
HTTP Status 500 -
type Exception report
message
description The server encountered a
On Mon, Jun 29, 2009 at 3:07 PM, m.harig wrote:
Thanks Simon,
This is how I am indexing my documents:
indexWriter.addDocument(doc, new StopAnalyzer());
indexWriter.setMergeFactor(10);
indexWriter.setMaxBufferedDocs(100);
indexWriter.setMaxMergeDocs(Integer.MAX_VALUE);
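For context, those setters fit together roughly like this in a Lucene 2.3/2.4-era indexing loop (a sketch; the index path, field name, and stored text are placeholders, not from the thread):

```java
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexPdfText {
    public static void main(String[] args) throws Exception {
        IndexWriter indexWriter =
                new IndexWriter("/tmp/testindex/", new StopAnalyzer(), true);
        // Tuning knobs from the snippet above: how many segments merge at
        // once, how many docs buffer in RAM, and the max docs per segment.
        indexWriter.setMergeFactor(10);
        indexWriter.setMaxBufferedDocs(100);
        indexWriter.setMaxMergeDocs(Integer.MAX_VALUE);

        Document doc = new Document();
        doc.add(new Field("contents", "extracted pdf text here",
                Field.Store.NO, Field.Index.TOKENIZED));
        indexWriter.addDocument(doc);

        indexWriter.optimize();
        indexWriter.close();
    }
}
```

Note that the analyzer is normally passed to the IndexWriter constructor once, rather than to every addDocument call.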
On Mon, Jun 29, 2009 at 2:55 PM, m.harig wrote:
Thanks Simon,
I don't run any other application on the Tomcat; moreover, I restarted
it. I am not doing any jobs except searching. We've got a 500GB drive; we've
indexed around 100,000 documents, which gives me around a 1GB index. When I
tried to search for pdf, I got the heap space error.
Well, with this information I can hardly tell what the cause of the
OOM is. It would be really helpful if you could figure out
where it happens. Do you get the OOM on the first try? I guess you do
not do any indexing in the background?!
What is your index "layout"? I mean, what kind of fields
Hey there,
before going out to use Hadoop (the Hadoop mailing list would help you
better, I guess) you could provide more information about your
situation. For instance:
- how big is your index
- version of Lucene
- which Java VM
- how much heap space
- where does the OOM occur
or maybe there is already
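One quick way to answer the VM and heap-space questions above is to print them from the JVM itself; this needs only the standard library, no Lucene:

```java
public class JvmInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Which Java VM is running, and how much heap it may grow to.
        System.out.println("java.version = "
                + System.getProperty("java.version"));
        System.out.println("java.vm.name = "
                + System.getProperty("java.vm.name"));
        System.out.println("max heap (MB) = "
                + rt.maxMemory() / (1024 * 1024));
    }
}
```

The max-heap figure reflects the -Xmx setting Tomcat was started with, which is usually the first thing to check when searches throw heap space errors.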