As docIDs are ints too, it's likely he'll hit the limit of ~2 billion documents per index with
that approach, though :)
I do agree that indexing huge documents doesn't seem to have a lot of value: even when you
know a doc is a hit for a certain query, how are you going to display the results to users?
John, for huge data sets it's usually a good idea to roll your own distributed indexes and to model
your data schema very carefully. For example, if you are going to index log files, one reasonable
idea is to make every 5 minutes of logs a document, roughly as sketched below.
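For what it's worth, here is a rough sketch of that bucketing idea, assuming a line-per-entry
log whose lines start with a parseable timestamp. The class name, field names, and timestamp
format are just illustrative assumptions, not anything from your actual schema, and real code
would need error handling for unparseable lines:

import java.io.BufferedReader;
import java.io.FileReader;
import java.text.SimpleDateFormat;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

public class FiveMinuteChunker {
    // Assumed log line prefix, e.g. "2014-02-14 16:12:03 ..."
    private static final SimpleDateFormat TS = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    public static void index(IndexWriter iw, String pathname) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(pathname));
        StringBuilder chunk = new StringBuilder();
        long currentBucket = -1;
        String line;
        while ((line = in.readLine()) != null) {
            long bucket = currentBucket;
            if (line.length() >= 19) {
                // bucket = number of whole 5-minute windows since the epoch
                long ts = TS.parse(line.substring(0, 19)).getTime();
                bucket = ts / (5 * 60 * 1000L);
            }
            if (bucket != currentBucket && chunk.length() > 0) {
                addChunk(iw, pathname, currentBucket, chunk.toString());
                chunk.setLength(0);
            }
            currentBucket = bucket;
            chunk.append(line).append('\n');
        }
        if (chunk.length() > 0) {
            addChunk(iw, pathname, currentBucket, chunk.toString());
        }
        in.close();
    }

    private static void addChunk(IndexWriter iw, String pathname, long bucket, String text)
            throws Exception {
        Document doc = new Document();
        doc.add(new StoredField("pathname", pathname));
        doc.add(new StoredField("bucket", bucket));               // which 5-minute window
        doc.add(new TextField("content", text, Field.Store.NO));  // searchable text, well under 2GB
        iw.addDocument(doc);
    }
}

Each document then stays far below the offset limit, and a hit can be narrowed to a
5-minute window of one file.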
Regards,
Tri
On Feb 14, 2014, at 01:20 PM, Glen Newton <glen.new...@gmail.com> wrote:
You should consider making each _line_ of the log file a (Lucene)
document (assuming it is a one-entry-per-line log file).
-Glen
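A minimal sketch of that per-line approach, with made-up field names (the lineno field is just
so hits can be traced back to their place in the file and displayed):

import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

public class PerLineIndexer {
    public static void index(IndexWriter iw, String pathname) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(pathname));
        String line;
        long lineNo = 0;
        while ((line = in.readLine()) != null) {
            lineNo++;
            Document doc = new Document();
            doc.add(new StoredField("pathname", pathname));            // where the line came from
            doc.add(new StoredField("lineno", lineNo));                // which line it was
            doc.add(new TextField("content", line, Field.Store.NO));   // the searchable text
            iw.addDocument(doc);                                       // one document per line
        }
        in.close();
    }
}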
On Fri, Feb 14, 2014 at 4:12 PM, John Cecere <john.cec...@oracle.com> wrote:

I'm not sure in today's world I would call 2GB 'immense' or 'enormous'. At
any rate, I don't have control over the size of the documents that go into
my database. Sometimes my customer's log files end up really big. I'm
willing to have huge indexes for these things.

Wouldn't just changing from int to long for the offsets solve the problem?
I'm sure it would probably have to be changed in a lot of places, but why
impose such a limitation? Especially since it's using an InputStream and
only dealing with a block of data at a time.

I'll take a look at your suggestion.

Thanks,
John

On 2/14/14 3:20 PM, Michael McCandless wrote:

Hmm, why are you indexing such immense documents?

In 3.x Lucene never sanity checked the offsets, so we would silently
index negative (int overflow'd) offsets into e.g. term vectors.

But in 4.x, we now detect this and throw the exception you're seeing,
because it can lead to index corruption when you index the offsets
into the postings.

If you really must index such enormous documents, maybe you could
create a custom tokenizer (derived from StandardTokenizer) that
"fixes" the offset before setting them? Or maybe just doesn't even
set them.

Note that position can also overflow, if your documents get too large.

Mike McCandless
http://blog.mikemccandless.com

On Fri, Feb 14, 2014 at 1:36 PM, John Cecere <john.cec...@oracle.com> wrote:

I'm having a problem with Lucene 4.5.1. Whenever I attempt to index a
file >2GB in size, it dies with the following exception:

java.lang.IllegalArgumentException: startOffset must be non-negative, and
endOffset must be >= startOffset, startOffset=-2147483648, endOffset=-2147483647

Essentially, I'm doing this:

Directory directory = new MMapDirectory(indexPath);
Analyzer analyzer = new StandardAnalyzer();
IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_45, analyzer);
IndexWriter iw = new IndexWriter(directory, iwc);

InputStream is = <my input stream>;
InputStreamReader reader = new InputStreamReader(is);

Document doc = new Document();
doc.add(new StoredField("fileid", fileid));
doc.add(new StoredField("pathname", pathname));
doc.add(new TextField("content", reader));
iw.addDocument(doc);

It's the IndexWriter addDocument method that throws the exception. In
looking at the Lucene source code, it appears that the offsets being
used internally are int, which makes it somewhat obvious why this is
happening.

This issue never happened when I used Lucene 3.6.0. 3.6.0 was perfectly
capable of handling a file over 2GB in this manner. What has changed and
how do I get around this?
Is Lucene no longer capable of handling files this large, or is there some
other way I should be doing this?

Here's the full stack trace sans my code:

java.lang.IllegalArgumentException: startOffset must be non-negative, and
endOffset must be >= startOffset, startOffset=-2147483648, endOffset=-2147483647
        at org.apache.lucene.analysis.tokenattributes.OffsetAttributeImpl.setOffset(OffsetAttributeImpl.java:45)
        at org.apache.lucene.analysis.standard.StandardTokenizer.incrementToken(StandardTokenizer.java:183)
        at org.apache.lucene.analysis.standard.StandardFilter.incrementToken(StandardFilter.java:49)
        at org.apache.lucene.analysis.core.LowerCaseFilter.incrementToken(LowerCaseFilter.java:54)
        at org.apache.lucene.analysis.util.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:82)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:174)
        at org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:248)
        at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:254)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:446)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1551)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1221)
        at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1202)

Thanks,
John

--
John Cecere
Principal Engineer - Oracle Corporation
732-987-4317 / john.cec...@oracle.com
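On Mike's point above about "fixing" the offsets or not setting them at all: StandardTokenizer
writes the offsets itself (that is where the exception is thrown), so one variation on his
suggestion is to hand it an AttributeFactory whose OffsetAttribute clamps bad values instead
of throwing. This is only a sketch under those assumptions -- the class names are made up,
the lowercase/stop filters are omitted, the clamped offsets are meaningless for highlighting,
and positions can still overflow as Mike notes -- so splitting the input into smaller
documents is probably the saner fix:

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.util.Attribute;
import org.apache.lucene.util.AttributeImpl;
import org.apache.lucene.util.AttributeSource;
import org.apache.lucene.util.Version;

// OffsetAttribute that never rejects its input; it clamps instead of throwing.
final class LenientOffsetAttributeImpl extends AttributeImpl implements OffsetAttribute {
    private int start, end;
    @Override public int startOffset() { return start; }
    @Override public int endOffset() { return end; }
    @Override public void setOffset(int startOffset, int endOffset) {
        start = Math.max(0, startOffset);   // swallow the int overflow
        end = Math.max(start, endOffset);
    }
    @Override public void clear() { start = end = 0; }
    @Override public void copyTo(AttributeImpl target) {
        ((OffsetAttribute) target).setOffset(start, end);
    }
}

public class LenientOffsetAnalyzer extends Analyzer {
    // Factory that substitutes the lenient offset attribute and delegates everything else.
    private static final AttributeSource.AttributeFactory LENIENT =
        new AttributeSource.AttributeFactory() {
            @Override
            public AttributeImpl createAttributeInstance(Class<? extends Attribute> attClass) {
                if (attClass == OffsetAttribute.class) {
                    return new LenientOffsetAttributeImpl();
                }
                return AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY
                        .createAttributeInstance(attClass);
            }
        };

    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        Tokenizer source = new StandardTokenizer(Version.LUCENE_45, LENIENT, reader);
        return new TokenStreamComponents(source);
    }
}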
---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org