The method BooleanQuery.add(Query q, BooleanClause.Occur o) accepts a
null Query as its first parameter, i.e. it doesn't throw any exception.
However, when we later try to get the string form of the same
BooleanQuery object, it throws a NullPointerException from within the
toString() code
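One way to sidestep the NPE is to guard the add yourself. A minimal sketch against the Lucene 2.0 API; the addIfNotNull helper is hypothetical, not part of Lucene:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class SafeBooleanQuery {
    // Hypothetical guard: skip null sub-queries instead of letting them
    // slip into the BooleanQuery and blow up later in toString().
    static void addIfNotNull(BooleanQuery bq, Query q, BooleanClause.Occur occur) {
        if (q != null) {
            bq.add(q, occur);
        }
    }

    public static void main(String[] args) {
        BooleanQuery bq = new BooleanQuery();
        addIfNotNull(bq, new TermQuery(new Term("name", "john")), BooleanClause.Occur.MUST);
        addIfNotNull(bq, null, BooleanClause.Occur.SHOULD); // silently skipped
        System.out.println(bq); // no NPE now
    }
}
```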
You can also try using a ConstantScoreRangeQuery in lieu of the plain
RangeQuery.
http://lucene.apache.org/java/docs/api/org/apache/lucene/search/ConstantScoreRangeQuery.html
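For reference, a minimal sketch of the two alternatives against the Lucene 1.9/2.0 API; the date field and its values are made up:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.ConstantScoreRangeQuery;
import org.apache.lucene.search.RangeQuery;

public class RangeQueryDemo {
    public static void main(String[] args) {
        // Plain RangeQuery expands into one clause per matching term and
        // can hit BooleanQuery's maxClauseCount limit on wide ranges.
        RangeQuery plain = new RangeQuery(
                new Term("date", "20060101"), new Term("date", "20061231"), true);

        // ConstantScoreRangeQuery matches the same documents but scores
        // them all equally and never expands into per-term clauses.
        ConstantScoreRangeQuery constant = new ConstantScoreRangeQuery(
                "date", "20060101", "20061231", true, true);

        System.out.println(plain);
        System.out.println(constant);
    }
}
```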
Regards,
Venu
-Original Message-
From: Mile Rosu [mailto:[EMAIL PROTECTED]
Sent: Monday, June 12, 2006 5:20 PM
T
I am sorry about the previous mail. It turns out that I was confusing
the *stored* value of the field with the *indexed* value. The indexed
value is what I expect it to be, alright.
Thanks all,
Venu
-Original Message-
From: Satuluri, Venu_Madhav
Sent: Thursday, June 08, 2006 10:08 PM
To
Hi all,
It seems to me my Fields aren't getting analyzed before they are stored
in the index. I am sure I am overlooking some obvious point here, but
can't figure out what that is. I recently migrated to Lucene 2.0 from
Lucene 1.4.3, and my fields used to get indexed earlier, so maybe I am
missing s
hence the empty set.
if you truly want a "NOT foo" style clause which you can then combine
with other queries in a true boolean set fashion, you'll need to use a
MatchAllDocs query or something like it...
name:xyz (field_every_doc_has_set_to_true:true -name:xyz)
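With Lucene 1.9+ the "all documents" side of that clause can come from MatchAllDocsQuery instead of a synthetic always-true field. A minimal sketch:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TermQuery;

public class NotFooQuery {
    public static void main(String[] args) {
        // "everything except name:xyz": Lucene needs at least one
        // positive clause, so MatchAllDocsQuery supplies the full set
        // that the prohibited clause is subtracted from.
        BooleanQuery bq = new BooleanQuery();
        bq.add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST);
        bq.add(new TermQuery(new Term("name", "xyz")), BooleanClause.Occur.MUST_NOT);
        System.out.println(bq);
    }
}
```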
: Date: Tue, 2
Hi all,
I build Query objects programmatically. I do this by getting a
TermQuery/PhraseQuery/whatever for each term in the user query, make a
BooleanClause by specifying isRequired and isProhibited depending on
whether the term has an "and" or an "or" or an "or not" etc prefixed
before it (I use 1
programmatically?
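For reference, a sketch of such a prefix-to-clause mapping in Lucene 2.0 terms, where BooleanClause.Occur replaces the old isRequired/isProhibited flags; the prefix strings and field names are assumptions:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class ProgrammaticQuery {
    // Hypothetical mapping from a user-entered prefix to a clause type.
    static BooleanClause.Occur occurFor(String prefix) {
        if ("and".equals(prefix))     return BooleanClause.Occur.MUST;     // was isRequired
        if ("and not".equals(prefix)) return BooleanClause.Occur.MUST_NOT; // was isProhibited
        return BooleanClause.Occur.SHOULD;                                 // plain "or"
    }

    public static void main(String[] args) {
        BooleanQuery bq = new BooleanQuery();
        bq.add(new TermQuery(new Term("body", "lucene")), occurFor("and"));
        bq.add(new TermQuery(new Term("body", "ruby")), occurFor("and not"));
        System.out.println(bq);
    }
}
```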
On May 18, 2006, at 8:08 AM, Satuluri, Venu_Madhav wrote:
> Is there any way to run my Query object through my analyzer? Or is
> there
> another solution?
But of course. Have a look at the source code to
QueryParser.getFieldQuery() - it does this very thing. I'm
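A rough sketch of the same idea, assuming the Lucene 2.0 analysis API (TokenStream.next() returning Token, Token.termText()); the field name and helper are made up:

```java
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class AnalyzedTermQuery {
    // Roughly what QueryParser.getFieldQuery() does: run the raw text
    // through the analyzer and build TermQuerys from whatever comes out.
    static Query analyzedQuery(Analyzer analyzer, String field, String text)
            throws IOException {
        TokenStream ts = analyzer.tokenStream(field, new StringReader(text));
        BooleanQuery bq = new BooleanQuery();
        for (Token t = ts.next(); t != null; t = ts.next()) {
            bq.add(new TermQuery(new Term(field, t.termText())),
                   BooleanClause.Occur.SHOULD);
        }
        return bq;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(analyzedQuery(new StandardAnalyzer(), "body", "Running Dogs"));
    }
}
```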
Hi all,
I have recently shifted to creating queries programmatically rather than
using the QueryParser as this gave me more flexibility. I am facing a
new problem, though: when indexing my fields are being analyzed (on a
per-field basis: most are being stemmed etc, some are keywords returned
as it
Hi,
I don't think it should cause any conflicts in the index itself (the
indexing process proper is decoupled from the analyzing), and if you can
decide which analyzer to use when you're searching based on the
field/kind of search, then it should be fine.
Regards,
Venu
-Original Messa
Try using luke to see how the document actually is in the index.
http://www.getopt.org/luke/
-Venu
-Original Message-
From: trupti mulajkar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 02, 2006 7:41 PM
To: java-user@lucene.apache.org
Subject: Re: creating indexReader object
thanks hann
ore performant than indexing all 'A join B'
documents.
>
> Any commenters?
>
> Jelda
>
> > -Original Message-
> > From: Satuluri, Venu_Madhav [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, April 13, 2006 6:15 PM
> > To: java-user@lucene.apache.or
I think you are asking if we can retain 1:n relationships in lucene.
Ok, I'll go out on a limb and give my solution. Say you have a table A
and table B with B having multiple rows associated to each row in A.
Also your documents are centered around A, i.e. all your queries return
some row(s) of A,
Red Piranha: http://red-piranha.sourceforge.net/
-Original Message-
From: Delip Rao [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 05, 2006 6:53 PM
To: java-user@lucene.apache.org
Subject: searching offline
Hi,
I have a large collection of text documents that I want to search
using l
will work with 1.4.3, so just grab that
class and put it into your project. A couple of variations of it are
also included with the Lucene in Action code.
Erik
On Apr 5, 2006, at 7:52 AM, Satuluri, Venu_Madhav wrote:
> Hi,
>
> I am using lucene 1.4.3. Some of my fields
Hi,
I am using lucene 1.4.3. Some of my fields are indexed as Keywords. I
have also subclassed Analyzer in order to add stemming etc. I am not sure
if the input is tokenized when I am searching on keyword fields; I don't
want it to be. Do I need to have a special case in the overridden method
(Anal
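One common way to get this behaviour is PerFieldAnalyzerWrapper, so keyword fields bypass tokenization entirely while everything else goes through the main analyzer. A sketch assuming Lucene 1.9+'s KeywordAnalyzer (on 1.4.3 you can copy that class into your project, as suggested elsewhere in this thread); the field names are made up:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.KeywordAnalyzer;
import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class KeywordAwareAnalyzer {
    // "id" and "category" are hypothetical keyword fields; all other
    // fields still go through the main (stemming) analyzer.
    static Analyzer build(Analyzer main) {
        PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(main);
        wrapper.addAnalyzer("id", new KeywordAnalyzer());
        wrapper.addAnalyzer("category", new KeywordAnalyzer());
        return wrapper;
    }

    public static void main(String[] args) {
        Analyzer a = build(new StandardAnalyzer());
        System.out.println(a.getClass().getName());
    }
}
```

Use the same wrapped analyzer at index time and at search time, so keyword values match exactly in both directions.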
You need to make sure that both the indexing and searching process use
the same lock directory.
-Original Message-
From: Supriya Kumar Shyamal [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 05, 2006 4:16 PM
To: java-user@lucene.apache.org
Subject: FS lock on NFS mounted filesystem for
I believe there's a MatchAllDocsQuery class from Lucene 1.9 onwards. You
can run this query to get all documents.
If you are not using 1.9, to my knowledge, you would have to add a
redundant field that would be true for all documents and query on that
field. Something like Field.Keyword("AllDocsTrue"
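A sketch of that pre-1.9 fallback, using the 1.4-era Field.Keyword() API (the field name is the one suggested above):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class AllDocsField {
    public static void main(String[] args) {
        // At index time, stamp every document with a constant keyword
        // field (untokenized, so it is stored verbatim)...
        Document doc = new Document();
        doc.add(Field.Keyword("AllDocsTrue", "true"));

        // ...then "all documents" is just a TermQuery on that field.
        Query all = new TermQuery(new Term("AllDocsTrue", "true"));
        System.out.println(all); // AllDocsTrue:true
    }
}
```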
Are you asking that common words not be searched? For this, you can use
StopFilter to prevent words from being indexed and searched.
Alternatively, you can use StandardAnalyzer, which in addition to
removing stop words also does more sophisticated tokenizing.
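A minimal custom analyzer along those lines, combining the stock StopFilter with the default English stop-word list; the class name is made up:

```java
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;

// Lower-cases the input, then drops stop words. Use the same analyzer
// at index time and at search time so common words never match.
public class StopWordAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        return new StopFilter(new LowerCaseTokenizer(reader),
                              StopAnalyzer.ENGLISH_STOP_WORDS);
    }
}
```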
Venu
-Original Message-
From
Thanks,
Venu
-Original Message-
From: Satuluri, Venu_Madhav
Sent: Wednesday, March 22, 2006 7:36 PM
To: java-user@lucene.apache.org
Subject: RE: Errors when searching index and writing to index
simultaneously
> Make sure both the indexing process and the searcher process use th
second type of exception
I mentioned in my earlier mail. That is, IndexSearcher is returning a
Hits
object, but Hits.doc() is throwing an exception ("Bad file number").
And, as I said, the index is getting corrupted whenever this happens.
Any ideas?
-Venu
-Original Message-
From: Satul
Hi,
If I run IndexSearcher.search() at the same time an IndexWriter is
adding a document to the index, I get the following kind of exception
frequently:
java.io.FileNotFoundException: /_3j.fnm (No such file or
directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.Ra
't retrieve all
the documents at once.
Also, what are the sizes of your fields and how many fields do you have
per document?
Have you done any profiling to find the bottlenecks? An index size of
50mb is actually pretty small for Lucene, perhaps you can share more
about your setup.
-
Hi,
I am looking for ways to improve the performance of lucene search in our
app. Lucene performance is visibly slow when there are a lot of
documents to be returned (performance almost seems directly proportional
to the number of documents returned by Searcher). However, we show 20
results per pa
index in sync
On Mon, Mar 13, 2006 at 06:23:10PM +0530, Satuluri, Venu_Madhav wrote:
> Hi,
>
> Is there an elegant way to keep RAMDirectory and my file-system based
> index in sync? I have a java class that is periodically started up by
> crond that checks for modified documents and
Hi,
Is there an elegant way to keep RAMDirectory and my file-system based
index in sync? I have a java class that is periodically started up by
crond that checks for modified documents and then reindexes them onto
the filesystem. However, for searching I want to use RAMDirectory (for
the performan
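One simple, non-incremental way to do it: after each reindex pass, copy the on-disk index into a fresh RAMDirectory and swap in a new searcher. A sketch against the 1.4-era API; the index path and class name are placeholders:

```java
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.RAMDirectory;

public class RamMirror {
    // Call this after crond finishes reindexing on disk. The
    // RAMDirectory(String) constructor copies the whole FS index into
    // memory, so the returned searcher sees a consistent snapshot.
    static IndexSearcher reload(String indexPath) throws IOException {
        RAMDirectory ram = new RAMDirectory(indexPath);
        return new IndexSearcher(ram);
    }
}
```

Old searchers can be closed once their in-flight queries finish; this trades copy cost at reload time for fast reads in between.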
> Query at = new TermQuery(new Term("alwaysTrueField", "true"));
> Query user = queryParser.parse(userInput);
> if (user instanceof BooleanQuery) {
>     BooleanQuery bq = (BooleanQuery) user;
>     if (! usableBooleanQuery(bq)) {
>         bq.add(at, true, false); /* add 'always true' clause
Hi,
The following query does not work as expected for me:
"alwaysTrueField:true (-name:john)"
neither does this:
"alwaysTrueField:true +(-name:john)"
It returns zero results, despite there being many documents without name
john. (alwaysTrueField is, needless to say, true for all documents).
This