Where shall I post this issue? I am new to Lucene.
It is about IndexWriter closing.
Currently I am doing this:
1. Open New IndexReader.
2. Delete Documents.
3. Close IndexReader.
4. Open New IndexWriter.
5. Write Documents.
6. Close IndexWriter.
7. Repeat the process n times; during the nth iteration
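One pass of the cycle above can be sketched as follows, assuming Lucene 2.x's API (the field names `id` and `body` and the class name are placeholders, not from the original post):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;

public class UpdateByKey {
    // One iteration of the delete-then-add cycle described above.
    public static void updateIndex(Directory dir, String key, String newText)
            throws Exception {
        // Steps 1-3: open a reader, delete by unique key, close it.
        IndexReader reader = IndexReader.open(dir);
        reader.deleteDocuments(new Term("id", key));
        reader.close();

        // Steps 4-6: open a writer, add the replacement document, close it.
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        Document doc = new Document();
        doc.add(new Field("id", key, Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("body", newText, Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.close();
    }
}
```

Each iteration replaces at most one document per key, so repeating it n times should leave a single live copy of the document for that key.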
Hello,
Company AB, ...). With this I'd like to search for documents that have
"daniel" and "president" in the same field instance, because in the same
text "daniel" and "president" can exist in different fields. Is this
possible?
Not totally sure whether I understand your problem, because it does not sound
On 27 Jul 2007 at 10:50, miztaken wrote:
My application simply shuts down.
After that, when I try to open the same index using IndexReader and
fetch the document, it says I am trying to access a deleted document.
After getting such an error, I opened the IndexWriter, optimized it, and
then closed it.
Can you use IndexWriter#deleteDocument instead?
No, I can't use this method.
I don't know the docid and I don't want to search for it; that would only
add extra time.
I am deleting the document on the basis of a unique key field.
Can you please supply an isolated and working test case that
demonstrates your
Actually no,
because I'd like to retrieve terms that were computed on the same
instance of Field. Taking your example to illustrate better, I have 2
documents: in documentA I structured one field, Field(fieldA, termA
termB, customAnalyzer). In documentB I structured 2 fields, Field(fieldA,
Hi guys,
I would like to know if there is some size limit for the fields of a
document.
I have the following problem:
when a term appears after a certain number of characters (approximately
87,300) in a field, the search does not find the occurrence.
If I divide my field into pages, the terms are found.
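The cutoff described above is consistent with IndexWriter's default maxFieldLength, which in Lucene 2.x truncates each field at 10,000 terms during indexing; terms beyond the limit are silently dropped, which would also explain why splitting the field into pages makes them searchable. A sketch of raising the limit (the class name is a placeholder):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

public class RaiseFieldLimit {
    public static IndexWriter openWriter(Directory dir) throws Exception {
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);
        // The default is 10,000 terms per field; everything after that is
        // silently dropped from the inverted index at indexing time.
        writer.setMaxFieldLength(Integer.MAX_VALUE);
        return writer;
    }
}
```

Note that the limit is counted in terms, not characters, so the character position at which it bites depends on the average token length in the text.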
On 27 Jul 2007 at 13:43, miztaken wrote:
Can you use IndexWriter#deleteDocument instead?
No, I can't use this method.
I don't know the docid and I don't want to search for it; that would only
add extra time.
I am deleting the document on the basis of a unique key field.
You can do that with
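Since Lucene 2.1, IndexWriter also offers deleteDocuments(Term), which deletes by a term such as the unique key without a prior search for the docid. A minimal sketch (the field name `id` and the class name are placeholders):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;

public class DeleteByKey {
    // Deletes every document whose unique-key field matches the given value,
    // using the writer directly instead of a reader or a search.
    public static void deleteByKey(Directory dir, String key) throws Exception {
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        writer.deleteDocuments(new Term("id", key));
        writer.close();
    }
}
```

This avoids the reader-open/reader-close step entirely, so deletes and adds can go through a single IndexWriter.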
I guess this also ties in with 'getPositionIncrementGap', which is relevant
to fields with multiple occurrences.
Peter
On 7/27/07, Peter Keegan [EMAIL PROTECTED] wrote:
I have a question about the way fields are analyzed and inverted by the
index writer. Currently, if a field has multiple
Every once in a while I get the following exception with Lucene 2.2. Do you
have any idea?
Thanks,
java.lang.NullPointerException
at
org.apache.lucene.index.MultiReader.getFieldNames(MultiReader.java:264)
at
What are the conditions you are following when running Lucene - like
configuration and parameters? Can you describe more?
thanks,
dt,
www.ejinz.com
Search Engine News
- Original Message -
From: testn [EMAIL PROTECTED]
To: java-user@lucene.apache.org
Sent: Friday, July 27, 2007 7:50 PM
- Using Spring Module 0.8a
- Using RAM directory
- Having about 100,000 documents
- Index all documents in one thread
- Perform the optimize only at the end of the indexing process
- Using Lucene 2.2
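The setup listed above can be sketched like this as a minimal reproduction attempt, assuming Lucene 2.2 (the Spring Modules wiring is omitted, and the field and class names are placeholders):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;

public class RamIndexRun {
    // Builds an in-memory index in a single thread, optimizing only once
    // at the end, as in the configuration described above.
    public static RAMDirectory buildIndex(int numDocs) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);
        for (int i = 0; i < numDocs; i++) {
            Document doc = new Document();
            doc.add(new Field("id", String.valueOf(i),
                    Field.Store.YES, Field.Index.UN_TOKENIZED));
            writer.addDocument(doc);
        }
        writer.optimize(); // single optimize at the end of the run
        writer.close();
        return dir;
    }

    public static void main(String[] args) throws Exception {
        buildIndex(100000); // roughly the document count reported above
    }
}
```

If the NullPointerException in MultiReader.getFieldNames still appears with a self-contained run like this, it would make the isolated test case the reporter was asked for.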
Dmitry-17 wrote:
What are the conditions you are following when running Lucene - like
Has anyone done any benchmarking of Lucene running with the index
stored on a SSD?
Given the performance characteristics quoted for, say, the SANDISK
devices (e.g.
http://www.sandisk.com/OEM/ProductCatalog(1321)-SanDisk_SSD_SATA_5000_25.aspx:
7000 IO/sec for 512 byte requests, 67MB/sec sustained