sociations.html
>
Yes, I started exactly from here :)
I read these posts yesterday and found them very useful for understanding the
basics.
But today, when I tried to write some experiments using lucene 4.8.1, I
couldn't find some of the classes used by the code examples.
Thank you for your response and the useful link to the demo package.
Bye
*Raf*
some classes (e.g.
FacetSearchParams
or CountFacetRequest).
Is there an updated version of that guide?
I tried this
http://lucene.apache.org/core/4_8_1/facet/org/apache/lucene/facet/doc-files/userguide.html
but it does not work :|
Thank you for any help you can provide.
Regards,
*Raf*
ream of the search field?
Thank you in advance.
Bye
*Raf*
th this approach is that I would need to do some
"manual parsing"
in the *translate* method to handle *Lucene query syntax* (+, -, (, ), and
so on).
I would like to extend *QueryParser* in order to avoid re-doing this job
(which is not a *translate* job, but a *parser* job).
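A minimal sketch (entirely hypothetical, not the QueryParser API) of the kind of manual token handling a hand-written *translate* method would otherwise have to re-implement:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "manual parsing" a translate method would
// have to re-do by hand: classifying +term / -term / term occurrences.
public class ManualParseSketch {
    public static List<String> classify(String query) {
        List<String> out = new ArrayList<>();
        for (String tok : query.trim().split("\\s+")) {
            if (tok.startsWith("+")) out.add("MUST:" + tok.substring(1));
            else if (tok.startsWith("-")) out.add("MUST_NOT:" + tok.substring(1));
            else out.add("SHOULD:" + tok);
            // ...and this still ignores grouping with ( ), phrases, escapes,
            // etc. -- exactly the parser work QueryParser already does.
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(classify("+apple -banana cherry"));
    }
}
```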
Thanks,
Bye
*Raf*
h to achieve the same goal?
I am using *lucene 3.0.3* and, for now, I cannot upgrade to a more recent
version.
Thanks in advance,
Bye.
*Raf*
MUST);
bq.add(new TermQuery(new Term("account", myAccount)), Occur.MUST);
bq.add(new TermRangeQuery("date", minDate, maxDate, false, false),
Occur.MUST);
and so on.
Bye
*Raf*
On Mon, Jun 20, 2011 at 5:54 PM, Hiller, Dean x66079 <
dean.hil...@broadridge.com> wrote:
You can simply use a KeywordAnalyzer for your NOT_ANALYZED fields.
This analyzer, in fact, does not modify your input.
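A plain-Java illustration (not the Lucene classes themselves) of the difference: a keyword-style tokenizer emits the whole input as one token, while a whitespace-style tokenizer splits it:

```java
import java.util.List;

// Hypothetical sketch of the two tokenization behaviours.
public class TokenizerSketch {
    // Keyword-style: the entire input becomes a single token, untouched.
    public static List<String> keywordTokens(String input) {
        return List.of(input);
    }

    // Whitespace-style: the input is split into several tokens.
    public static List<String> whitespaceTokens(String input) {
        return List.of(input.trim().split("\\s+"));
    }

    public static void main(String[] args) {
        System.out.println(keywordTokens("New York"));     // one token
        System.out.println(whitespaceTokens("New York"));  // two tokens
    }
}
```

This is why KeywordAnalyzer pairs well with NOT_ANALYZED fields: the term stored in the index is exactly the term produced at query time.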
Regards,
*Raf*
On Mon, Jun 20, 2011 at 5:12 PM, G.Long wrote:
> Ok, I'll try this.
>
> But will it work if one of the fields has no analyzer assigned?
>
>
IndexReader newReader = reader.reopen(true);  // may return a new instance
if (newReader != reader) {
    reader.close();  // only close the old reader if a new one was returned
    reader = newReader;
    searcher = new IndexSearcher(reader);  // searcher must wrap the new reader
}
instead of
reader.reopen(true);
Bye.
*Raf*
On Sun, Jan 16, 2011 at 11:06 AM, sol myr wrote:
> Hi,
>
> Thank you kindly for replying.
> Unfortunately, reopen() doesn
, implementing the *nextDoc* and *
advance* methods according to AND/OR semantics.
We use something like that and, for very sparse bitsets, it is more
efficient than converting them into *OpenBitSets* in order to perform AND/OR
operations.
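A plain-Java sketch of the idea (class and method names are made up; Lucene's DocIdSetIterator-based conjunctions use the same leapfrog pattern via nextDoc/advance):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: AND/OR over two sorted doc-id streams, walking only
// the set docs instead of materializing full bitsets.
public class SortedDocIdOps {

    // AND: advance whichever stream is behind until both point at the same doc.
    public static List<Integer> and(int[] a, int[] b) {
        List<Integer> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.length && j < b.length) {
            if (a[i] == b[j]) { out.add(a[i]); i++; j++; }
            else if (a[i] < b[j]) i++;   // "advance" a toward b's current doc
            else j++;                    // "advance" b toward a's current doc
        }
        return out;
    }

    // OR: classic sorted merge, emitting each doc id once.
    public static List<Integer> or(int[] a, int[] b) {
        List<Integer> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.length || j < b.length) {
            if (j == b.length || (i < a.length && a[i] < b[j])) out.add(a[i++]);
            else if (i == a.length || b[j] < a[i]) out.add(b[j++]);
            else { out.add(a[i]); i++; j++; }  // same doc in both streams
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(and(new int[]{1, 3, 7, 90}, new int[]{3, 7, 8}));
        System.out.println(or(new int[]{1, 3, 7, 90}, new int[]{3, 7, 8}));
    }
}
```

For very sparse sets this costs O(n + m) over the set docs only, instead of bit operations over the whole maxDoc range.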
Bye
*Raf*
Hi,
I think you should use another IndexWriter constructor:
IndexWriter(Directory d, Analyzer a, IndexWriter.MaxFieldLength mfl)
Constructs an IndexWriter for the index in d, first *creating it
if it does not already exist*.
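The create-if-missing idea, sketched with java.nio.file rather than the Lucene API (directory name and helper are made up):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of "open, first creating it if it does not already exist",
// using a plain directory in place of a Lucene index.
public class OpenOrCreate {
    public static Path openOrCreate(Path dir) {
        try {
            if (!Files.exists(dir)) {
                Files.createDirectories(dir);  // first use: create the "index"
            }
            return dir;                        // later uses: open the existing one
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path dir = Path.of(System.getProperty("java.io.tmpdir"),
                "openorcreate-demo", "index");
        openOrCreate(dir);  // creates it
        openOrCreate(dir);  // now just opens it
        System.out.println(Files.isDirectory(dir));
    }
}
```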
Hope this helps.
Bye
Raf
On Sun, Jan 24, 2010 at 4:48 AM
u will normally find fewer documents!
If you want to search all documents that contain both A and B, you should
write the query as +A +B (or change the query parser's default operator
from OR to AND).
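A toy illustration in plain Java (the document set and terms are made up) of why +A +B matches fewer documents than A B under the default OR operator:

```java
import java.util.List;
import java.util.Set;

// Made-up mini "index": each document is just its set of terms.
public class AndVsOr {
    static final List<Set<String>> DOCS = List.of(
            Set.of("A"), Set.of("B"), Set.of("A", "B"), Set.of("C"));

    // OR semantics: a doc matches if it contains at least one query term.
    public static long countOr(Set<String> query) {
        return DOCS.stream()
                .filter(d -> query.stream().anyMatch(d::contains))
                .count();
    }

    // AND semantics (+A +B): a doc matches only if it contains every term.
    public static long countAnd(Set<String> query) {
        return DOCS.stream().filter(d -> d.containsAll(query)).count();
    }

    public static void main(String[] args) {
        Set<String> q = Set.of("A", "B");
        System.out.println("A OR B matches: " + countOr(q));
        System.out.println("+A +B matches:  " + countAnd(q));
    }
}
```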
Bye
Raf
On Sat, Oct 31, 2009 at 5:58 AM, Hrishikesh Agashe <
hrishikesh_aga...@persistent.co.in&
{
return (this.myBitset);
}
}
}
In Lucene 2.4.1 the output is:
Filter extraction:
Extracted: 1 --> b
Extracted: 2 --> c
Extracted: 6 --> y
Searcher extraction:
Extracted: 1 --> b
Extracted: 2 --> c
Extracted: 6 --> y
while in Lucene 2.9 I have:
Filter extraction:
Extracted: 1 --> b
Extracted: 2 --> c
Extracted: 6 --> y
Searcher extraction:
Extracted: 1 --> b
Extracted: 2 --> c
Extracted: 6 --> y
Extracted: 7 --> z
Is it a bug in the new Lucene searcher or am I missing something?
Thanks,
Bye
Raf
over a new "pattern" (while we are using our system) we
will have to reindex the documents...
Using the regex approach, instead, we can configure the pattern we want to
identify for each domain and simply change the configuration when we find
a new pattern.
Anyway, thank you for your sugg
while (rte.term() != null) {
System.out.println(rte.term() + " " + rte.docFreq());
rte.next();
count++;
}
assertEquals(1, count);
... ... ...
I find this a bit confusing, but at least I have solved my problem now :)
Thank you very much Erick.
Bye
Field.Index.NOT_ANALYZED));
doc.add(new Field("contents", "contenuto documento 3",
Field.Store.YES, Field.Index.NOT_ANALYZED));
writer.addDocument(doc);
writer.optimize();
writer.close();
}
}
What am I missing?
Thanks.
Bye,
Raf
409 ms 2,470 ms
*2 Consolidated index (1 index)*
2a Range [2009010100 - 20090131235959] --> 379,560 docs
444 ms 72 ms 72 ms
2b Range [2008120100 - 20090131235959] --> 974,754 docs
576 ms 208 ms 140 ms
2c Range [2008100100 - 20090131235959] --> 2,197,590 do
on the production environment, so I think I will have to
consolidate indexes for now.
Thanks a lot for your help,
Raf
If you are interested, here you can find the new test code and a result
comparison between 2.4.1 and 2.9:
*RangeFilter searcher test*
@Test
public void testRangeFilterSearch
. So I think that my best
choice, at the moment, is to consolidate my indexes and wait until this
interesting new feature is available in an official release.
Thanks a lot to all of you,
Raf
On Fri, Apr 10, 2009 at 10:13 PM, Uwe Schindler wrote:
> You got a lot of answers and questi
No, it is a MultiReader that contains 72 (I am sorry, I wrote a wrong number
last time) "single" readers.
Raf
On Fri, Apr 10, 2009 at 9:14 PM, Mark Miller wrote:
> Raf wrote:
>
>>
>> We have more or less 3M documents in 24 indexes and we read all of them
>>
640 ms 159 ms 138 ms
2c Range [2008100100 - 20090131235959] --> 2,197,590 docs
817 ms 322 ms 295 ms
The field on which I am applying the RangeFilter is a date field and it has
299,622 unique terms.
Thanks,
Raf
On Fri, Apr 10, 2009 at 7:54 PM, Michael McCandless <
lu
.
Raf
On Fri, Apr 10, 2009 at 4:48 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:
> Unfortunately, in Lucene 2.4, any query that needs to enumerate Terms
> (Prefix, Wildcard, Range, etc.) has poor performance on Multi*Readers.
> I think the only workaround is to merge your
search using this index, it takes only a small
fraction of the previous time (about 2s).
Is there something we can do to improve search performance using
RangeFilters with a MultiReader, or is the only solution to have a single
big index?
Thanks,
Raf