() on it, so its facets are re-indexed too.
Shai
On Wed, Jul 3, 2013 at 8:52 PM, Peng Gao p...@esri.com wrote:
Shai,
Thanks.
I went with option #3 since the temp indexes are actually created in
separate processes in my case.
It works.
Now one more complication.
I have a case
to
return the modified live docs.
Same as option 1, but you don't actually do the delete operation, which is
more costly than just unsetting a bit.
Shai
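[Editorial sketch] The point above — that hiding a document by clearing its live-docs bit is cheaper than a real delete — can be illustrated without Lucene. In Lucene 4.x live docs are exposed through the Bits interface (often backed by a FixedBitSet); the toy below uses plain java.util.BitSet to show the idea, so the class and method names are illustrative, not Lucene's API:

```java
import java.util.BitSet;

public class LiveDocsSketch {
    public static void main(String[] args) {
        int maxDoc = 8;
        // Every document starts out "live": all bits set.
        BitSet liveDocs = new BitSet(maxDoc);
        liveDocs.set(0, maxDoc);

        // "Deleting" doc 3 is just clearing its bit -- no costly
        // delete-by-term/query against the index is performed.
        liveDocs.clear(3);

        System.out.println(liveDocs.get(3));        // false: doc 3 is hidden
        System.out.println(liveDocs.cardinality()); // 7 live docs remain
    }
}
```

A searcher that consults this bitset simply skips cleared documents, which is why flipping a bit is all the "delete" needs to cost here.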
On Fri, Jul 5, 2013 at 6:10 PM, Peng Gao p...@esri.com wrote:
Shai,
Once again, thanks for the help.
Yes, I am re-indexing. Using
such
exceptions since one index may have bigger ordinals than what the taxonomy
reader knows about.
Can you share a little bit about your scenario and why you need to use a
MultiReader?
Shai
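[Editorial sketch] The exception described above follows from how facet counting is typically implemented: the count array is sized from what the TaxonomyReader reports, so an ordinal written by a newer taxonomy falls off the end of it. A minimal JDK-only sketch of that failure mode (the array and variable names are illustrative, not Lucene's internals):

```java
public class OrdinalMismatchSketch {
    public static void main(String[] args) {
        int taxoSize = 5;               // what a stale TaxonomyReader reports
        int[] counts = new int[taxoSize];

        int ordinalFromIndex = 7;       // ordinal added by a newer taxonomy writer
        try {
            counts[ordinalFromIndex]++; // same shape of failure as the accumulator
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("ordinal " + ordinalFromIndex
                + " is beyond taxonomy size " + taxoSize);
        }
    }
}
```

The remedy, as the thread implies, is to reopen/refresh the TaxonomyReader so it knows about all ordinals present in the index before counting.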
On Tue, Jul 2, 2013 at 3:31 AM, Peng Gao p...@esri.com wrote:
How do I accumulate counts over
know if that works for you.
Shai
On Wed, Jul 3, 2013 at 6:14 PM, Peng Gao p...@esri.com wrote:
Hi Shai,
Thanks for the reply.
Yes I used a single TaxonomyReader instance.
I am adding facets to an existing app, which maintains two indexes,
one for indexing system tools
How do I accumulate counts over a MultiReader (2 IndexReader)?
The following code causes an IOException:
ArrayList<FacetRequest> facetRequests = new ArrayList<FacetRequest>();
for (String groupField : groupFields)
  facetRequests.add(new CountFacetRequest(new CategoryPath(groupField), 10));
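[Editorial sketch] Whatever causes the IOException, the arithmetic of counting over a MultiReader is simple: because both indexes share one taxonomy (one ordinal space), the total count per facet ordinal is just the sum of the per-reader counts. A toy JDK-only illustration, with plain maps standing in for Lucene's accumulators (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MergedCountsSketch {
    // Merge per-reader facet counts keyed by taxonomy ordinal.
    // Assumes both readers were indexed against the same taxonomy.
    static Map<Integer, Integer> merge(Map<Integer, Integer> a,
                                       Map<Integer, Integer> b) {
        Map<Integer, Integer> total = new HashMap<>(a);
        b.forEach((ord, c) -> total.merge(ord, c, Integer::sum));
        return total;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> reader1 = Map.of(1, 3, 2, 5);
        Map<Integer, Integer> reader2 = Map.of(2, 2, 4, 1);
        System.out.println(merge(reader1, reader2));
    }
}
```

This additivity is why counting per sub-reader and summing gives the same result as counting over the combined reader, as long as a single TaxonomyReader covers every ordinal that appears in either index.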
documentation has a large list of per-language analyzers. EnglishAnalyzer
is in the org.apache.lucene.analysis.en package:
http://lucene.apache.org/core/4_1_0/analyzers-common/org/apache/lucene/analysis/en/package-summary.html
Steve
On Feb 28, 2013, at 1:28 PM, Peng Gao p...@esri.com