Thank you all for your time and writing!
This will really help me.
Lukas
On 11/21/06, adasal [EMAIL PROTECTED] wrote:
Thanks for the link and your write-up.
On 19/11/06, Shay Banon [EMAIL PROTECTED] wrote:
Since I do not want to invade the Lucene user list regarding a discussion
about
Compass
Paul,
We are working on delivering the next release by the end of the week so
I have to take care of 2 or 3 issues before I try the nightly build.
I promise to try it and report the results here.
Best,
Stanislav
Paul Elschot wrote:
Stanislav,
Could you also try a nightly build to test the
On Nov 21, 2006, at 10:34 PM, Antony Bowesman wrote:
On the field-specific fields, I want to control the parsing to
ensure that the parser will not interpret fields in the user-entered
string, so in those fields it treats ':' as a literal character all of the
time. However, in the free form field, anything
Hi Erick,
Thanks for your help...
I have successfully implemented using custom HitCollector
- Bhavin pandya
- Original Message -
From: Erick Erickson [EMAIL PROTECTED]
To: java-user@lucene.apache.org; Bhavin Pandya [EMAIL PROTECTED]
Sent: Tuesday, November 21, 2006 8:58 PM
checking one last thing, just in case...
As I mentioned, I have previously indexed the same document in another
index (for another purpose). Since I am going to use the same analyzer,
would it be possible to avoid analyzing the doc again?
I see IndexWriter.addDocument() returns void, so it does
Hi all:
I read on this a list many threads about Lucene indexing framework
integration with Oracle.
http://www.gossamer-threads.com/lists/lucene/java-user/41104?search_string=oracle%20jvm%20BLOB;#41104
So it pushed me to work on Lucene with the Oracle JVM (a Java virtual
machine running inside the
Very interesting.
So how does this solution manage mapping Oracle primary keys to and from Lucene
doc ids?
Another benefit of using the Data Cartridge API is that if the
table T1 has insert, update, or delete operations on its rows, a
corresponding Java method will be called to automatically update
Hi Mark:
Very interesting.
So how does this solution manage mapping Oracle primary keys to and from Lucene
doc ids?
I am storing the rowid value as a Document field; here is a code snippet:
Document doc = new Document();
doc.add(new Field("rowid", rowid,
Hi, Marcelo,
Yes, putting it in the public space would be great. I personally would
be very interested to have a look. Can it be posted on the 'lucene'
website?
Vlad
-Original Message-
From: Marcelo Ochoa [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 22, 2006 8:10 AM
To:
Hi Vladimir:
Well, I am finishing the implementation of the ancillary operator
score() and the contains function, ready to use outside the SQL WHERE
clause, for example:
select score(1),colx,coly from t1 where contains(f2,'test',1)=1
select contains(f2,'test') from t1
Then I'll move the
Sorry if I'm missing the point here, but what about simply replacing colons
with spaces first?
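Michael's suggestion can be sketched in a few lines of plain Java, done before the user's text reaches QueryParser. The class and method names here are hypothetical, not from the thread:

```java
// A minimal sketch of the "replace colons with spaces" idea: neutralize
// ':' in user-entered text before parsing, so it can never be taken as a
// field separator. ColonSanitizer is an illustrative name, not a Lucene class.
public class ColonSanitizer {
    static String sanitize(String userInput) {
        // The analyzer will then see the surrounding text as ordinary terms.
        return userInput.replace(':', ' ');
    }

    public static void main(String[] args) {
        System.out.println(sanitize("subject:urgent at 10:30"));
        // prints "subject urgent at 10 30"
    }
}
```

The trade-off, as the thread notes, is that legitimate uses of ':' (times, URLs) are also split, which may or may not matter for the analyzer in use.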
Michael.
-Original Message-
From: Antony Bowesman [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 21, 2006 10:01 PM
To: java-user@lucene.apache.org
Subject: Re: Limiting QueryParser
I've also got to ask a similar question to Michael's... Who is the UI
intended for? If it's intended for any type of end user, even other IT
folks who aren't Lucene junkies, trying to explain when colons count and
when they don't is going to be a challenge. I predict it will lead to one
endless
I've never tried it, but I guess you could write an Analyzer and
TokenFilter that not only feeds into IndexWriter on
IndexWriter.addDocument(), but as a sneaky side effect also
simultaneously saves its tokens into a list so that you could later
turn that list into another TokenStream to be
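The "sneaky side effect" described above can be illustrated with plain Java, independent of Lucene's Analyzer/TokenStream classes. CachingIterator below is purely illustrative of the pattern, not a Lucene API:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the caching idea: a wrapper that passes tokens through to the
// primary consumer while recording each one, so a second consumer can later
// replay the identical stream without re-running analysis.
public class CachingIterator<T> implements Iterator<T> {
    private final Iterator<T> child;
    private final List<T> cache = new ArrayList<T>();

    public CachingIterator(Iterator<T> child) {
        this.child = child;
    }

    public boolean hasNext() {
        return child.hasNext();
    }

    public T next() {
        T token = child.next();
        cache.add(token);   // the "sneaky side effect": remember every token
        return token;
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }

    // Replay the recorded tokens, e.g. to feed a second index.
    public Iterator<T> replay() {
        return cache.iterator();
    }
}
```

In Lucene terms, the wrapped iterator would stand in for the child analyzer's token stream, and replay() for building the second TokenStream from the cached list.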
Hello all.
Previous message lost somewhere, resending...
The index in my application has about 15M docs and is about 6 GB in size.
I want to implement sorting on some fields, but with the default approach
the FieldCache can be quite large, and I want to keep the application
footprint small.
So the question is:
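The footprint concern above can be sized with back-of-the-envelope arithmetic. This sketch assumes the common case of one 4-byte int per document per sorted field (e.g. FieldCache-style int arrays); String sort fields cost considerably more:

```java
// Rough estimate of per-field FieldCache memory for an int sort field,
// assuming one 4-byte int per document. Numbers match the index size
// mentioned above (~15M docs); this is an estimate, not a measurement.
public class FieldCacheEstimate {
    public static void main(String[] args) {
        long docs = 15000000L;                  // ~15M documents
        long bytes = docs * 4;                  // one int per doc
        long mb = bytes / (1024 * 1024);
        System.out.println(mb + " MB per sorted int field");
        // prints "57 MB per sorted int field"
    }
}
```

So each additional int sort field adds on the order of 57 MB of heap for an index this size, before any String field caches.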
Marcelo Ochoa wrote:
Then I'll move the code outside the lucene-2.0 code tree to be
packed as subdirectory of the contrib area, for example.
Another alternative is to make a small zip file and send it to the
list as an attachment, as a preliminary (alpha-alpha version ;)
This sounds like great
Michael Rusch wrote:
Sorry if I'm missing the point here, but what about simply replacing colons
with spaces first?
Michael.
Err, thanks. I've been in too deep at the wrong end :) Wood, trees and
visibility spring to mind!
Antony
Erik Hatcher wrote:
It doesn't seem like you need a parser at all for your field-specific
search fields. Simply tokenize, using a Lucene Analyzer, the text field
and build up a BooleanQuery of all the tokens.
That's what I'm currently doing, but I was getting bogged down with trying to
Out of interest, I've checked an implementation of something like
this into AnalyzerUtil SVN trunk:
/**
 * Returns an analyzer wrapper that caches all tokens generated by
 * the underlying child analyzer's token stream, and delivers those
 * cached tokens on subsequent calls to
 *
Hi,all
Maybe lots of you have used Google Mail. I noticed Google Mail has a
good feature called labels: you can select a mail and then just put labels on
it by selecting More Actions -> Apply Label. The labels can be created by
users dynamically. Then you can list your mails just by these
Hi, Emmanuel,
I think you did a great job! Since I am now working on a system that
uses Lucene to implement a search engine, I would like to know some more
details about Hibernate Search.
I have read some of the code in the Hibernate 3.2 GA release; the code is
pretty cool, but there is one
Wow, very cool, even though I don't use Oracle anywhere at the moment.
You probably don't want that rowid field tokenized, by the way.
Otis
- Original Message
From: Marcelo Ochoa [EMAIL PROTECTED]
To: java-user@lucene.apache.org
Sent: Wednesday, November 22, 2006 8:44:58 AM
Subject: Re: