Yes, we do have a large number of unique field names in that index, because
they are driven by user-named fields in our application (with some cleaning to
remove illegal characters).
This slowness problem has appeared very suddenly in the last couple of weeks
and the number of unique field names has
On Wed, Nov 3, 2010 at 4:27 PM, Mark Kristensson wrote:
>
> I've run CheckIndex against the index and the results are below. The net is
> that it's telling me nothing is wrong with the index.
Thanks.
> I did not have any instrumentation around the opening of the IndexSearcher
> (we don't use
I've run CheckIndex against the index and the results are below. The net is
that it's telling me nothing is wrong with the index.
I did not have any instrumentation around the opening of the IndexSearcher (we
don't use an IndexReader), just around the actual query execution so I had to
add so
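(For context, a minimal sketch of that kind of timing instrumentation against a
3.x-era Lucene; the path, field, and term below are placeholders, not the
poster's actual code:)
import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.FSDirectory;

public class TimedSearch {
    public static void main(String[] args) throws Exception {
        long t0 = System.currentTimeMillis();
        IndexReader reader = IndexReader.open(FSDirectory.open(new File(args[0])));
        IndexSearcher searcher = new IndexSearcher(reader);
        long t1 = System.currentTimeMillis();
        // A cheap TermQuery separates reader-open cost from query cost.
        searcher.search(new TermQuery(new Term("field", "value")), 10);
        long t2 = System.currentTimeMillis();
        System.out.println("open: " + (t1 - t0) + " ms, query: " + (t2 - t1) + " ms");
        searcher.close();
        reader.close();
    }
}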
I'd even offer: if the index is small, perhaps you can post it
somewhere for us to download and debug/trace commit()…
Also, though not very scientific, you can turn on debug messages by
setting an infoStream and observing which prints take the longest to appear.
Not very accurate, but if there's one oper
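(A one-line sketch of that infoStream suggestion; the indexWriter variable is
assumed from the poster's context:)
// Route IndexWriter's flush/merge/commit diagnostics to stdout (Lucene 3.x API).
indexWriter.setInfoStream(System.out);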
Can you run CheckIndex (command line tool) and post the output?
How long does it take to open a reader on this same index, and perform
a simple query (e.g. TermQuery)?
Mike
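(For reference, CheckIndex ships in lucene-core and is run from the command
line against the index directory; the jar name and path here are placeholders:)
java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /path/to/index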
On Wed, Nov 3, 2010 at 2:53 PM, Mark Kristensson wrote:
> I've successfully reproduced the issue in our lab with a copy from
> It turns out that prepareCommit() is the slow call here, taking several
> seconds to complete.
>
> I've done some reading about it, but have not found anything that might be
> helpful here. The fact that it is slow
> every single time, even when I'm adding exactly one document to the index,
I've successfully reproduced the issue in our lab with a copy from production
and have broken the close() call into parts, as suggested, with one addition.
Previously, the call was simply
...
} finally {
    // Close the writer if it was opened
    if (indexWriter != null) {
        indexWriter.close();
    }
}
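(A sketch of what breaking close() into parts with timing might look like;
prepareCommit()/commit()/close() are the real IndexWriter steps, but the timing
scaffolding is illustrative, not the poster's actual code:)
long t0 = System.currentTimeMillis();
indexWriter.prepareCommit();   // flushes and syncs; the step reported as slow
long t1 = System.currentTimeMillis();
indexWriter.commit();          // completes the prepared commit, making it visible
long t2 = System.currentTimeMillis();
indexWriter.close();
long t3 = System.currentTimeMillis();
System.out.println("prepareCommit: " + (t1 - t0) + " ms, commit: "
    + (t2 - t1) + " ms, close: " + (t3 - t2) + " ms");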
Is there a behavioral difference between:
Query filtered = new FilteredQuery(query, filter1);
searcher.search(filtered, filter2, n);
...and:
ChainedFilter filter = new ChainedFilter(
new Filter[]{filter1, filter2}, ChainedFilter.AND);
searcher.search(query, filter, n);
I chose
Thanks very much, I got it.
-----Original Message-----
From: Simon Willnauer [mailto:simon.willna...@googlemail.com]
Sent: Tuesday, November 02, 2010 11:28 PM
To: Lance Norskog
Cc: java-user@lucene.apache.org
Subject: Re: How to handle more than Integer.MAX_VALUE documents?
On Wed, Nov 3, 2010 a
I'm assuming you're down in Lucene land. Unless somehow you've
gotten 63 separate filters when you think you only have one, I don't
think what you're doing will work. Or I'm failing to understand what
you're doing at all.
The problem is, I expect each of your indexes starts with document
1. So your
Hi. We have a large index (~28 GB) that is split across three directories,
each representing a country. Each of these per-country indexes is further
split, by last update date, into 21 smaller indexes. The index is updated
once a day.
A user can search into any
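(One common way to search across per-country/per-date sub-indexes like these is
to combine the readers with MultiReader; a minimal sketch under that
assumption, with hypothetical directory paths:)
import java.io.File;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

// Open one reader per sub-index and search them as a single logical index.
IndexReader r1 = IndexReader.open(FSDirectory.open(new File("/indexes/country1/day01")));
IndexReader r2 = IndexReader.open(FSDirectory.open(new File("/indexes/country1/day02")));
IndexSearcher searcher = new IndexSearcher(new MultiReader(new IndexReader[]{r1, r2}));
// Closing the MultiReader also closes the sub-readers it owns by default.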