Hi all,
Has anyone run into this issue?
I have documents that have one or more values for the same field. For example:
doc1 = new Document();
doc1.add(new Field("Letter", "A", ...));
doc1.add(new Field("Letter", "C", ...));
// doc1.add(other fields);
//write to index
Now I add another docum
I've tested ConstantScorePrefixQuery and it hit the nail on the head. It's
now mind-bogglingly fast! Even a query with 200,000 matches came back in
under 0.5 seconds!
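For later readers of this thread: the constant-score prefix idea can be sketched as below. This is a minimal illustration using the Lucene 2.x-era API, not Andre's actual code; the field name, analyzer, and the ConstantScoreQuery-over-PrefixFilter wrapping are my assumptions.

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.PrefixFilter;
import org.apache.lucene.store.RAMDirectory;

public class ConstantScorePrefixDemo {
    /** Indexes the labels untokenized, then counts prefix matches with a
     *  constant-score query. Every match gets the same score, so Lucene
     *  skips per-document scoring, which is what makes huge prefix
     *  result sets fast. */
    public static int prefixCount(String[] labels, String prefix) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(), true);
        for (String label : labels) {
            Document doc = new Document();
            doc.add(new Field("label", label, Field.Store.YES, Field.Index.UN_TOKENIZED));
            writer.addDocument(doc);
        }
        writer.close();

        // Wrap a PrefixFilter in a ConstantScoreQuery instead of using
        // a plain PrefixQuery, which would expand and score every term.
        IndexSearcher searcher = new IndexSearcher(dir);
        Hits hits = searcher.search(
                new ConstantScoreQuery(new PrefixFilter(new Term("label", prefix))));
        int n = hits.length();
        searcher.close();
        return n;
    }
}
```

The same approach still sorts fine with a Sort argument, since sorting does not depend on scores.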
Thanks! :))
Andre
On Tue, Sep 2, 2008 at 10:44 AM, Mark Miller <[EMAIL PROTECTED]> wrote:
> Andre Rubin wrote:
>
On Tue, Sep 2, 2008 at 10:16 AM, Mark Miller <[EMAIL PROTECTED]> wrote:
> Andre Rubin wrote:
>
Hi all,
Most of our queries are very simple, of the type:
Query query = new PrefixQuery(new Term(LABEL_FIELD, prefix));
Hits hits = searcher.search(query, new Sort(new SortField(LABEL_FIELD)));
This sometimes results in 10, 20, sometimes 40 thousand hits.
I get good performance if hits.length is
Hey all
I have 2 indexes. Both have an ID field and one or more String fields. I
want to merge these indexes by combining the documents from each index
whose IDs match.
For example:
Index 1:
Doc1:
id: 1234 (*)
text:bla bla
text:abcd
Index2:
DocA:
id:1234 (*)
text:xyz
(*) IDs match
So I w
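One way to do this kind of merge (not from the original thread; a sketch assuming the "id" field is indexed untokenized and unique within each index, Lucene 2.x-era API) is to walk one index and look up the matching document in the other by its id term:

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Fieldable;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.store.Directory;

public class IdMerge {
    /** Copies every document of index1 into dest, folding in the fields of
     *  the index2 document whose "id" term matches. */
    public static void merge(Directory index1, Directory index2, Directory dest)
            throws Exception {
        IndexReader r1 = IndexReader.open(index1);
        IndexReader r2 = IndexReader.open(index2);
        IndexWriter writer = new IndexWriter(dest, new WhitespaceAnalyzer(), true);
        for (int i = 0; i < r1.maxDoc(); i++) {
            if (r1.isDeleted(i)) continue;
            Document merged = r1.document(i);
            String id = merged.get("id");
            // look up the doc in index2 with the same id term
            TermDocs td = r2.termDocs(new Term("id", id));
            if (td.next()) {
                Document other = r2.document(td.doc());
                // copy every field except the duplicate id
                for (Object f : other.getFields()) {
                    Fieldable field = (Fieldable) f;
                    if (!"id".equals(field.name())) merged.add(field);
                }
            }
            writer.addDocument(merged);
        }
        writer.close();
        r1.close();
        r2.close();
    }
}
```

One caveat: re-adding documents read from an IndexReader only carries over stored fields; anything indexed but not stored cannot be reconstructed this way.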
So, you mean you're gonna be removing the deprecated methods from the api?
Andre
On Tue, Aug 26, 2008 at 3:59 PM, Karl Wettin <[EMAIL PROTECTED]> wrote:
>
> 27 aug 2008 kl. 00.52 skrev Darren Govoni:
>
> Hi,
>> Sorry if I missed this somewhere or maybe its not released yet, but I
>> was anxiou
(type);
QueryParser parser = new QueryParser(TYPE_FIELD, ANALYZER);
Query tq = parser.parse(TYPE_FIELD + ":" + escapedType);
Andre
On Tue, Aug 26, 2008 at 10:19 AM, Daniel Naber <
[EMAIL PROTECTED]> wrote:
> On Dienstag, 26. August 2008, Andre Rubin wrote:
>
> > Now I was
n,
Andre
On Tue, Aug 26, 2008 at 9:34 AM, Daniel Naber <[EMAIL PROTECTED]
> wrote:
> On Dienstag, 26. August 2008, Andre Rubin wrote:
>
> > I just have one more use case. I want the same prefix search as before,
> > plus another match in another field.
>
> Not sure
returns a
BooleanQuery.
Thanks again,
Andre
On Tue, Aug 26, 2008 at 2:37 AM, Daniel Naber <[EMAIL PROTECTED]
> wrote:
> On Dienstag, 26. August 2008, Andre Rubin wrote:
>
> > For some reason, the TermQuery is not returning any results, even when
> > querying for a single w
For some reason, the TermQuery is not returning any results, even when
querying for a single word (like on*).
query = new TermQuery(new Term(LABEL_FIELD, searchString));
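(A common cause of this, for later readers: TermQuery does not run the analyzer, so the term must match the indexed token exactly. If the field was indexed through a lowercasing analyzer, the raw search string has to be normalized the same way first. A sketch; the field name and the lowercasing assumption are illustrative, not confirmed by the thread:)

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class ExactTerm {
    /** TermQuery skips analysis entirely, so normalize the input the same
     *  way the indexing analyzer did (here: lowercasing). */
    public static Query labelQuery(String searchString) {
        return new TermQuery(new Term("label", searchString.toLowerCase()));
    }
}
```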
On 8/25/08, Daniel Naber <[EMAIL PROTECTED]> wrote:
> On Montag, 25. August 2008, Andre Rubin wrote:
>
>>
o luck (I think I did it wrong). In any
case, is MultiPhraseQuery what I'm looking for? If it is, how should I use
the MultiPhraseQuery class?
Thanks,
Andre
-- Forwarded message --
From: Andre Rubin <[EMAIL PROTECTED]>
Date: Thu, Aug 21, 2008 at 2:21 AM
Subject: Re:
Just to add to that, as I said before, in my case I found it more useful not
to use UN_TOKENIZED. Instead, I used TOKENIZED with a custom analyzer that
combines the KeywordTokenizer (entire input as a single token) with the
LowerCaseFilter: this way I get the best of both worlds.
public class KeywordLow
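(The class above is cut off in the archive. A minimal version of that idea, assuming it pairs KeywordTokenizer with LowerCaseFilter as described, could look like this; the class name is a guess, and the API is the Lucene 2.x Analyzer contract:)

```java
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.KeywordTokenizer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;

/** Treats the entire field value as a single token, lowercased.
 *  Gives untokenized-style exact matching that stays case-insensitive. */
public class KeywordLowerCaseAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        // KeywordTokenizer emits the whole input as one token;
        // LowerCaseFilter then normalizes its case.
        return new LowerCaseFilter(new KeywordTokenizer(reader));
    }
}
```

Use the same analyzer at index and query time so both sides normalize identically.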
Sergey,
Based on a recent discussion I posted:
http://www.nabble.com/Searching-Tokenized-x-Un_tokenized-td18882569.html
, you cannot use UN_TOKENIZED because no analyzer is run
through it.
My suggestion: use a tokenized field and a custom-made Analyzer.
Haven't figured out all the det
;s applied to that 1 Token.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: Andre Rubin <[EMAIL PROTECTED]>
>> To: java-user@lucene.apache.org
>> Sent: Wednesday, August 13, 2008 12:15:25 AM
&g
--
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: Andre Rubin <[EMAIL PROTECTED]>
>> To: java-user@lucene.apache.org
>> Sent: Tuesday, August 12, 2008 5:30:47 PM
>> Subject: Re: Searching Tokenized x Un
, untokenized means "full string" -
> it requires an "exact match".
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>> From: Andre Rubin <[EMAIL PROTECTED]>
>> To: java-us
I'm new to Lucene, and I've been reading a lot of messages regarding
deleting docs. But I think my problem is more basic. I can't delete docs
from my index and (after the index is created the first time and the writer
is closed) I can't add new documents to an existing index.
Sorry for the lengthy
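(Not from the thread, but the usual pattern for this problem: open the IndexWriter with create=false to append to an existing index, and delete by a unique term. A sketch under the assumption that each document carries a unique untokenized "id" field:)

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;

public class IndexUpdates {
    /** Appends a document to an existing index instead of overwriting it. */
    public static void append(Directory dir, String id, String body) throws Exception {
        // create=false opens the existing index rather than replacing it
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        Document doc = new Document();
        doc.add(new Field("id", id, Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("body", body, Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.close();
    }

    /** Deletes all documents whose id field equals the given value. */
    public static void delete(Directory dir, String id) throws Exception {
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
        writer.deleteDocuments(new Term("id", id));
        writer.close();
    }
}
```

The key point is the boolean create argument: passing true wipes the index, which is the classic reason new documents seem to "disappear".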
Hi all,
When I switched a String field from tokenized to untokenized, some
searches stopped returning obvious matches. Am I missing
something about querying untokenized fields? Another question: do I
need an Analyzer if my search is on an untokenized field? Wouldn't the
search be based on
okenized. Sorting on this
> field instead, Lucene will treat "North Carolina" as one token and sort
> as you'd expect. The downside to this approach is that you will have to
> juggle the two fields in the future.
>
> - Mark
>
> Andre Rubin wrote:
>> Hi t
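The two-field approach quoted above (tokenized copy for searching, untokenized copy for sorting) can be sketched like this; the field names are illustrative, not from the thread:

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class SortableField {
    /** Indexes the text twice: tokenized for searching, and as a single
     *  untokenized token so "North Carolina" sorts as one value. */
    public static Document makeDoc(String state) {
        Document doc = new Document();
        doc.add(new Field("state", state, Field.Store.YES, Field.Index.TOKENIZED));
        // sort-only copy; no need to store it
        doc.add(new Field("stateSort", state, Field.Store.NO, Field.Index.UN_TOKENIZED));
        return doc;
    }
}
```

Queries then search on "state" but sort with `new Sort(new SortField("stateSort"))`; the cost, as Mark notes, is keeping the two fields in sync.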
Hi there!
I'm new to Lucene, so forgive any misconceptions on my part.
I created an Index and now I want to search on it based on a field.
The field is a String field and Field.Store.YES and
Field.Index.TOKENIZED. No problems with the search.
Now, I wanted to sort the results, and according to t