Hi,
Is there a way to delete the results of a query or a filter, rather than
documents specified by a Term? I have seen some explanations here but I do
not know how to do it:
http://www.nabble.com/Batch-deletions-of-Records-from-index-tf615674.html#a1644740
Thanks in advance
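One approach discussed in the linked thread: run the query, collect the matching document ids, then delete them through an IndexReader. The sketch below targets the Lucene 2.x API; the field name "contents" and the RAMDirectory setup are assumptions for illustration, not from the original mail.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.RAMDirectory;

public class DeleteByQuery {

    /** Deletes every document matching queryText; returns the number removed. */
    public static int deleteMatching(RAMDirectory dir, String queryText) throws Exception {
        IndexReader reader = IndexReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        Query query = new QueryParser("contents", new StandardAnalyzer()).parse(queryText);
        Hits hits = searcher.search(query);
        int[] ids = new int[hits.length()];        // collect the ids first...
        for (int i = 0; i < ids.length; i++) {
            ids[i] = hits.id(i);
        }
        for (int i = 0; i < ids.length; i++) {
            reader.deleteDocument(ids[i]);         // ...then mark each as deleted
        }
        searcher.close();
        reader.close();                            // closing commits the deletions
        return ids.length;
    }
}
```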
testn,
Here is my code, but the strange thing is that I can't reach my goal with
Luke either. Look: I have a field (Indexed, Tokenized, and Stored) with a
wide variety of values, from numbers to characters. I run the query
patientResult:oxalate but the result is no document (using
White
Hi,
I have done using this:
final QueryParser filterQueryParser = new QueryParser("", new KeywordAnalyzer());
hits = indexSearcher.search(query, new QueryWrapperFilter(filterQueryParser.parse(filterQ
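For reference, a complete, hedged version of the truncated snippet above: a KeywordAnalyzer-backed QueryParser builds the filter query, and QueryWrapperFilter restricts the main search to documents matching it. The filter syntax "type:report" and the field names are invented for illustration.

```java
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryWrapperFilter;

public class FilteredSearch {

    /** Runs query restricted to documents matching the filter parsed from filterText. */
    public static Hits search(IndexSearcher indexSearcher, Query query, String filterText)
            throws Exception {
        // KeywordAnalyzer keeps each filter value as a single, unmodified token,
        // so "type:report" becomes exactly TermQuery(type:report).
        QueryParser filterQueryParser = new QueryParser("", new KeywordAnalyzer());
        Query filterQuery = filterQueryParser.parse(filterText);
        return indexSearcher.search(query, new QueryWrapperFilter(filterQuery));
    }
}
```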
: My lucene query: fieldName:"pinki i" finds document. (see "i" in "pinki")
I'm guessing that in this debugging output you provided...
: > indexed value: pink-I
: > Indexed tokens:1: [pink:0->5] 2: [pinki:0->5] 3: [i:5->6]
: > (ex. explanation:
: > "pink" is a term "0->5" term-position)
...tha
What version of Lucene are you using?
On Aug 17, 2007, at 12:44 PM, [EMAIL PROTECTED] wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello community, dear Grant
I have built a JUnit test case that illustrates the problem -
there, I try to cut
out the right substring with the offset v
I've noticed a few threads on this so far... maybe it's useful, or maybe
somebody's already done this, or maybe it's insane and bug-prone.
Anyway, our application requires Lucene to act as a non-critical
database, in that each record is composed of denormalized data derived
from the real DBMS. The
Hi,
On Aug 16, 2007, at 2:20 PM, Lokeya wrote:
Hi All,
I have the following set up: a) Indexed a set of docs. b) Ran the 1st query
and got top docs. c) Fetched the ids from those and stored them in a data
structure. d) Ran the 2nd query, got top docs, fetched the ids, and stored
them in a data structure.
N
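Steps (b)-(d) above can be sketched like this with the Lucene 2.x Hits API; the stored field name "id" is an assumption, since the original mail does not say which field holds the ids.

```java
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class CollectIds {

    /** Runs q and returns the values of the stored "id" field of every hit. */
    public static Set collect(IndexSearcher searcher, Query q) throws Exception {
        Hits hits = searcher.search(q);
        Set ids = new HashSet();
        for (int i = 0; i < hits.length(); i++) {
            ids.add(hits.doc(i).get("id"));   // get() reads the stored field value
        }
        return ids;
    }
}
```

Run this once per query and compare the two resulting sets (e.g. with `retainAll` for the intersection).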
I'm currently trying to figure how I could provide a Lucene-based search
functionality to an existing system. Though the application is hosted in
multiple boxes, they do NOT share a SAN where we can put the index directory.
Each of the nodes needs to update Lucene documents, but it's not going to
Ignore the part about "much longer strings"; I overlooked that this
was a single term.
But it still works on my machine, Lucene 2.1...
Erick
On 8/17/07, Michael McCandless <[EMAIL PROTECTED]> wrote:
>
>
> Hmmm ... good catch. With DocumentsWriter there is a max term length
> (currently 16384
I've added MUCH larger strings to a document without any problem,
but it was an FSDir. I admit that it is kind of "interesting" that this
happens just as you cross the magic number.
But I tried it on my machine and it works just fine, go figure...
Erick
On 8/17/07, karl wettin <[EMAIL PROTECTED]
Try searching the mail archives for SynonymMap, as I know this was
discussed a while ago but don't remember the specifics.
Erick
On 8/17/07, Antonius Ng <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> I'd like to add more words into SynonymMap for my application, but the
> HashMap that holds all the
Hmmm ... good catch. With DocumentsWriter there is a max term length
(currently 16384 chars). I think we should fix it to raise a clearer
exception? I'll open an issue.
Mike
On Fri, 17 Aug 2007 19:53:09 +0200, "karl wettin" <[EMAIL PROTECTED]> said:
> When I add a field containing a really lo
Sure. I'd recommend that you start by taking out our custom
tokenizer and looking at what Lucene does rather than what you've
tried to emulate. For instance, the StandardTokenizer returns
offsets that are one more than the end of the previous token. That is,
the following program (Lucene 2.1)
imp
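The program Erick mentions is cut off above; a hedged reconstruction of that kind of program is below. It prints each token StandardTokenizer produces along with its offsets and position increment (Lucene 2.x TokenStream API); the input strings are my own examples.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public class ShowOffsets {

    /** Returns one "term:start->end(+posIncr)" string per token in text. */
    public static List describe(String text) throws Exception {
        StandardTokenizer ts = new StandardTokenizer(new StringReader(text));
        List out = new ArrayList();
        Token t;
        while ((t = ts.next()) != null) {         // 2.x API: next() returns Token or null
            out.add(t.termText() + ":" + t.startOffset() + "->" + t.endOffset()
                    + "(+" + t.getPositionIncrement() + ")");
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(describe("pink-I"));
    }
}
```

Comparing this output with what a custom analyzer emits for the same input makes offset and position discrepancies easy to spot.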
When I add a field containing a really long term I get an AIOOBE. Is
this a documented feature?
public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter iw = new IndexWriter(dir, new StandardAnalyzer(Collections.emptySet()), true);
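Karl's snippet is cut off above; a hedged completion is below. It adds one field whose single term exceeds the 16384-char limit Mike mentions, which is what triggers the ArrayIndexOutOfBoundsException on affected versions (the field name "f" and the 20000-char length are my choices for illustration).

```java
import java.util.Collections;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.RAMDirectory;

public class LongTermRepro {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter iw = new IndexWriter(dir, new StandardAnalyzer(Collections.emptySet()), true);
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 20000; i++) {
            sb.append('x');                       // one unbroken 20000-char term
        }
        Document doc = new Document();
        doc.add(new Field("f", sb.toString(), Field.Store.NO, Field.Index.TOKENIZED));
        try {
            iw.addDocument(doc);                  // AIOOBE reported on affected versions
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("AIOOBE, as reported in this thread");
        } finally {
            iw.close();
        }
    }
}
```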
Hi all,
I'd like to add more words into SynonymMap for my application, but the
HashMap that holds all the words is not visible (private).
Is there any other Class that I can use to implement SynonymAnalyzer? I am
using Lucene version 2.2.0
Antonius Ng
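Since the map inside SynonymMap is private, one workaround is a small TokenFilter over your own map that injects synonyms at the same position as the original token. This is a sketch of that idea, not the contrib SynonymMap API; the class name and map layout are my own.

```java
import java.io.IOException;
import java.util.LinkedList;
import java.util.Map;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public class SimpleSynonymFilter extends TokenFilter {

    private final Map synonyms;                       // String -> String[]
    private final LinkedList pending = new LinkedList();

    public SimpleSynonymFilter(TokenStream in, Map synonyms) {
        super(in);
        this.synonyms = synonyms;
    }

    public Token next() throws IOException {
        if (!pending.isEmpty()) {
            return (Token) pending.removeFirst();     // emit queued synonyms first
        }
        Token t = input.next();
        if (t == null) {
            return null;
        }
        String[] syns = (String[]) synonyms.get(t.termText());
        if (syns != null) {
            for (int i = 0; i < syns.length; i++) {
                Token syn = new Token(syns[i], t.startOffset(), t.endOffset());
                syn.setPositionIncrement(0);          // stack on the original's position
                pending.addLast(syn);
            }
        }
        return t;
    }
}
```

Because the injected tokens have a position increment of 0, phrase queries treat them as occupying the same slot as the original word.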
Hi Erick,
Thanks.
Here I try my best to provide pseudo code.
Indexed Value: "pink-i"
I have used a Custom Analyzer. My Analyzer looks a little bit like the
following:
public class KeyWordFilter extends TokenFilter {
    public KeyWordFilter(TokenStream in) {
        super(in);
        keyword
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello community, dear Grant
I have built a JUnit test case that illustrates the problem - there, I try
to cut out the right substring with the offset values given from Lucene -
and fail :(
A few remarks:
In this example, the 'é' from 'Bosé' makes t
You'd get much better answers if you posted a concise example
(or possibly code snippets), especially including the analyzers you
used.
Have you used Luke to examine your index and see if it's indexed as
you expect?
Best
Erick
On 8/17/07, Ramana Jelda <[EMAIL PROTECTED]> wrote:
>
> Strangely..
>
Strangely..
My lucene query: fieldName:"pinki i" finds document. (see "i" in "pinki")
Jelda
> -Original Message-
> From: Ramana Jelda [mailto:[EMAIL PROTECTED]
> Sent: Friday, August 17, 2007 12:33 PM
> To: java-user@lucene.apache.org
> Subject: Issue with indexed tokens position
>
>
Hi,
Lucene doesn't find the following value. There are some issues with PhraseQuery.
indexed value: pink-I
Indexed tokens:1: [pink:0->5] 2: [pinki:0->5] 3: [i:5->6] (ex. explanation:
"pink" is a term "0->5" term-position)
And I have indexed in a field called "fieldName".
My lucene search with the query [fieldN