Hi Nicola,
I didn't read the code examples, but I'll address your last question
regarding the Aggregator. Indeed, with Lucene 4.2,
FacetRequest.createAggregator is not called by the default
FacetsAccumulator. That method should be removed from FacetRequest
entirely, but unfortunately we did not
Hi,
I'm currently calling:
FacetsCollector.create(new StandardFacetsAccumulator(facetSearchParams,
indexReader, getTaxonomyReader()))
which calls FacetRequest.createAggregator(...)
but is not working properly.
I'm extending CountingAggregator and then Aggregator; if I override
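For reference, here is a minimal sketch of routing a search through StandardFacetsAccumulator so that createAggregator is actually invoked, with a custom aggregator plugged in by overriding the request. This assumes the Lucene 4.2-era facet API; the variables `indexReader`, `taxonomyReader`, `searcher`, and `query`, and the exact createAggregator signature, are assumptions for illustration, not code from this thread.

```java
import org.apache.lucene.facet.search.FacetsCollector;
import org.apache.lucene.facet.search.StandardFacetsAccumulator;
import org.apache.lucene.facet.search.aggregator.Aggregator;
import org.apache.lucene.facet.search.aggregator.CountingAggregator;
import org.apache.lucene.facet.search.params.CountFacetRequest;
import org.apache.lucene.facet.search.params.FacetRequest;
import org.apache.lucene.facet.search.params.FacetSearchParams;
import org.apache.lucene.facet.search.FacetArrays;
import org.apache.lucene.facet.taxonomy.CategoryPath;
import org.apache.lucene.facet.taxonomy.TaxonomyReader;

// Sketch only: package locations and the createAggregator signature
// may need adjusting for your exact 4.x version.
FacetRequest request = new CountFacetRequest(new CategoryPath("Author"), 10) {
  @Override
  public Aggregator createAggregator(boolean useComplements,
                                     FacetArrays arrays,
                                     TaxonomyReader taxonomy) {
    // Plug in a custom CountingAggregator subclass here.
    return new CountingAggregator(arrays.getIntArray());
  }
};
FacetSearchParams fsp = new FacetSearchParams(request);
FacetsCollector fc = FacetsCollector.create(
    new StandardFacetsAccumulator(fsp, indexReader, taxonomyReader));
searcher.search(query, fc);
```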
Hi,
I have the following scenario: I have a very large index (I'm testing
with around 200,000 documents, but it should scale to many millions)
and I want to perform a search on a certain field.
Based on that search, I would like to manipulate a different field
for all the
Hi Igor,
About your performance problem with SpanQueries and Payloads:
Try to filter with the corresponding BooleanQuery and use a profiler.
You have an I/O bottleneck from reading position and payload
information per document.
Perhaps it would help if you first filter out the obviously
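A sketch of that suggestion: gate the expensive SpanQuery (which reads positions and payloads per hit) behind a cheap BooleanQuery filter, so only documents that pass the filter pay the I/O cost. The field and term names here are illustrative assumptions.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.FilteredQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

// Cheap pre-filter: a plain BooleanQuery over the same terms,
// which matches on the inverted index without touching positions.
BooleanQuery filter = new BooleanQuery();
filter.add(new TermQuery(new Term("body", "lucene")),
           BooleanClause.Occur.MUST);

// Expensive query: positions (and payloads) are only read for
// documents that survive the filter.
SpanQuery spans = new SpanTermQuery(new Term("body", "lucene"));
Query filtered = new FilteredQuery(spans, new QueryWrapperFilter(filter));
TopDocs hits = searcher.search(filtered, 10);
```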
Hi,
After finishing indexing, we tried to consolidate all segments using
forceMerge, but we consistently get an out-of-memory error even after
increasing the heap to 4GB.
Exception in thread main java.lang.IllegalStateException: this writer hit
an OutOfMemoryError; cannot complete forceMerge
Merging BinaryDocValues doesn't use any RAM; it streams the values from
the segments it's merging directly to the newly written segment.
So if you have this problem, it's unrelated to merging: it means you
don't have enough RAM to support all the stuff you are putting in these
BinaryDocValues.
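To illustrate the distinction: the RAM cost comes from the per-document values themselves, not from merge buffers. A sketch, where `writer` and `largeByteArray` are assumed to exist:

```java
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.util.BytesRef;

// Each document carries a binary doc-values entry; very large values
// per document drive up overall memory requirements.
Document doc = new Document();
doc.add(new BinaryDocValuesField("blob", new BytesRef(largeByteArray)));
writer.addDocument(doc);

// forceMerge streams these values segment-to-segment, so an OOM during
// it points at the total size of the stored values, not at the merge.
```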
Strange. This happens after I add all documents using the
IndexWriter.addDocument() method; everything works well at that point.
I then call IndexWriter.forceMerge(1), and finally IndexWriter.close(true).
The out-of-memory error happens after I call forceMerge(1) but before
close(true).
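The sequence described above, sketched with the Lucene 4.x IndexWriter API; `directory`, `analyzer`, and `docs` are assumed to exist:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.util.Version;

IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_42, analyzer);
IndexWriter writer = new IndexWriter(directory, iwc);
for (Document doc : docs) {
  writer.addDocument(doc);   // indexing itself completes fine
}
writer.forceMerge(1);        // the OutOfMemoryError surfaces here
writer.close(true);          // true = wait for merges before closing
```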
If