should mostly compile
with 3.6 - but this depends on the features used and the complexity of your
code). In addition, Lucene 4 can no longer read indexes from Lucene 2.9. If you
want to reuse your already-built indexes, you have to upgrade them to 3.6 in
any case, as an extra step during data migration.
Are you able to reindex the data from source? Typical practice with
search indexes is to treat them as secondary stores for full-text search
that mirror a primary database or data store.
-Doug
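If reindexing from source is not possible, the two-step upgrade can be scripted; a sketch assuming Lucene 3.6's IndexUpgrader tool on the classpath, with a placeholder index path:

```java
import java.io.File;
import org.apache.lucene.index.IndexUpgrader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

// Sketch: rewrite an older-format index into the 3.6 format in place,
// so that Lucene 4.x can then open it. The path is a placeholder.
public class UpgradeIndex {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File("/path/to/index"));
        new IndexUpgrader(dir, Version.LUCENE_36).upgrade();
        dir.close();
    }
}
```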
On Thu, Mar 20, 2014 at 12:52 PM, NarasimhaRao DPNV <
narasimha.jav...@gmail.com> wrote:
Hi
I have started migrating my Lucene search application from version 2.9 to
4.7.0. Please suggest the best way and best practices for doing this; there
are many files to rewrite.
Thank you,
Narasimha.
You should call addAttribute in your constructor.
The simplest way to see why this is good: imagine if someone was to
use your TokenFilter with say a WhitespaceTokenizer that does not add
PayloadAttribute. Then your filter would not produce any error, the
PayloadAttribute would just be empty as you expect.
The reason is that addAttribute returns the already-registered instance if
one exists, and creates a new one only if it does not - so producer and
consumer always share the same attribute instance.
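The pattern being recommended, as a minimal sketch (the filter class name is invented):

```java
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PayloadAttribute;

// Sketch: declare the attribute with addAttribute() in the constructor.
// addAttribute() returns the instance the Tokenizer already registered,
// or creates an empty one if no producer added it - so the filter works
// with any Tokenizer, payload-aware or not.
public final class MyPayloadFilter extends TokenFilter {
    private final PayloadAttribute payloadAtt;

    public MyPayloadFilter(TokenStream input) {
        super(input);
        payloadAtt = addAttribute(PayloadAttribute.class); // not getAttribute()
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;
        }
        // payloadAtt.getPayload() may be null if the tokenizer set none
        return true;
    }
}
```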
> From: Teruhiko Kurosaka [mailto:k...@basistech.com]
> Sent: Sunday, December 05, 2010 12:05 AM
> To: java-user@lucene.apache.org
> Subject: Re: PayloadAttribute behavior change between Lucene 2.9/3.0 and
> the trunk
>
Thank you, Robert, substituting getAttribute with addAttribute worked!
But I don't understand why. Could you help me to understand the mechanics?
In my setting,
hasAttribute(PayloadAttribute.class) returns false.
So I thought addAttribute(PayloadAttribute.class) would just
create a new PayloadAttribute.
On Fri, Dec 3, 2010 at 10:15 PM, Teruhiko Kurosaka wrote:
Hello,
I have a Tokenizer that generates a Payload, and a TokenFilter that uses it.
These work well with Solr 1.4.0 (therefore Lucene 2.9.1?), but when
I switched to the trunk version (I rebuilt the Tokenizer and TokenFilter
using the Lucene jar from the trunk and ran it), I encountered this problem.
Hey Nikolay,
On Thu, May 27, 2010 at 11:00 AM, Nikolay Zamosenchuk
wrote:
getSequentialSubReaders() was introduced to support per-segment search in
Lucene 2.9: instead of searching through the composite reader, the searcher
visits each segment's sub-reader individually.
Hi, dear colleagues!
I have one question concerning IndexReader.getSequentialSubReaders()
and its usage.
Imagine there is a class extending DirectoryReader or MultiReader.
Usually a directory- or multi-reader consists of sub-readers (i.e.
segment readers). Is it safe enough to always return null in
There are good ways of dealing with this.
Best
Erick
On Wed, May 12, 2010 at 1:04 PM, Gregory Tarr wrote:
How easy is it to influence the score of search results in lucene 2.9?
The situation is that we have a large number of dated documents that
match the term "john" but we want to return the latest documents when
"john" is the search term.
My solution to this would be to ove
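One common approach, sketched here (not necessarily the poster's planned override): rank the matches by an indexed date field instead of changing the scoring. The field and class names are invented; "date" is assumed to be a NOT_ANALYZED field holding a sortable long value.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;

// Sketch: keep the query as-is but sort matches newest-first on an
// indexed "date" field - no custom Similarity or scorer needed.
public class LatestFirst {
    public static TopDocs latestJohn(IndexSearcher searcher) throws Exception {
        Sort byDateDesc = new Sort(new SortField("date", SortField.LONG, true));
        return searcher.search(new TermQuery(new Term("body", "john")),
                               null, 50, byDateDesc);
    }
}
```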
I am using lucene 2.9 and I can't understand why a succession of
un-deprecated methods calls a deprecated method in this class.
The series of calls is as follows:
Searcher.search(Query, Collector)
IndexSearcher.search(Weight, Filter, Collector)
Scorer.score(Collector)
DocIdSetIterator.next()
become smaller.
> - The optimized index has practically the same size as the not optimized one.
>
> Yuliya
>
>> -----Original Message-----
>> From: Michael McCandless [mailto:luc...@mikemccandless.com]
>> Sent: Friday, 8 January 2010 14:38
>> To: java-user@l
> -----Original Message-----
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Friday, 8 January 2010 14:38
> To: java-user@lucene.apache.org
> Subject: Re: Lucene 2.9 and 3.0: Optimized index is thrice as
> large as the not optimized index
>
> Lucene stores 1 byte (disk and RAM,
sparsely"?
>
> Thanks,
> Yuliya
>
>> -----Original Message-----
>> From: Michael McCandless [mailto:luc...@mikemccandless.com]
>> Sent: Thursday, 7 January 2010 18:00
>> To: java-user@lucene.apache.org
>> Subject: Re: Lucene 2.9 and 3.0: Optimi
> To: java-user@lucene.apache.org
> Subject: Re: Lucene 2.9 and 3.0: Optimized index is thrice as
> large as the not optimized index
>
> Do your documents have many different indexed fields? If you
> do, and norms are enabled, that could be the cause (norms are
> not
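If norms do turn out to be the cause, they can be disabled per field wherever scoring on that field is not needed; a minimal sketch (field name and value are invented):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

// Sketch: when a field is only filtered/matched on and never scored,
// indexing it without norms avoids the 1-byte-per-field-per-document
// norms array that can inflate an index with many distinct fields.
public class NoNormsExample {
    public static Document makeDoc() {
        Document doc = new Document();
        doc.add(new Field("tag", "value", Field.Store.NO,
                          Field.Index.NOT_ANALYZED_NO_NORMS));
        return doc;
    }
}
```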
>> -----Original Message-----
>> From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
>> Sent: Thursday, 7 January 2010 17:35
>> To: java-user@lucene.apache.org
>> Subject: Re: Lucene 2.9 and 3.0: Optimized index is thrice as
>> large as the not optimize
7, 2010 11:50:29 AM
> Subject: AW: Lucene 2.9 and 3.0: Optimized index is thrice as large as the
> not optimized index
>
> Otis,
>
> thanks for the answer.
>
> Unfortunately the index *directory* remains larger *after* the optimization.
> In our case the optimization was
> -----Original Message-----
> From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
> Sent: Thursday, 7 January 2010 17:35
> To: java-user@lucene.apache.org
> Subject: Re: Lucene 2.9 and 3.0: Optimized index is thrice as
> large as the not optimized index
>
> Yuliya,
>
> T
Do you have a reader open on the index that was opened before the
index was optimized? Maybe there is a reader around holding on to
references to the merged segments.
simon
On Thu, Jan 7, 2010 at 5:23 PM, Yuliya Palchaninava wrote:
> Hi,
>
> According to the api documentation: "In genera
Message
> From: Yuliya Palchaninava
> To: "java-user@lucene.apache.org"
> Sent: Thu, January 7, 2010 11:23:08 AM
> Subject: Lucene 2.9 and 3.0: Optimized index is thrice as large as the not
> optimized index
>
> Hi,
>
> According to the api document
Hi,
According to the API documentation: "In general, once the optimize completes,
the total size of the index will be less than the size of the starting index.
It could be quite a bit smaller (if there were many pending deletes) or just
slightly smaller". In our case the index does not become smaller.
On Thu, Dec 31, 2009 at 12:34 PM, Kumaravel Kandasami
wrote:
> Identified the problem.
>
> reader.close() was not getting called in a specific logic flow.
Phew :) Thanks for bringing closure.
Mike
Identified the problem.
reader.close() was not getting called in a specific logic flow.
Thank You.
Kumar_/|\_
www.saisk.com
ku...@saisk.com
"making a profound difference with knowledge and creativity..."
On Thu, Dec 31, 2009 at 11:11 AM, Kumaravel Kandasami <
kumaravel.kandas...@gmail.co
Thanks Mike.
I think it has something to do with the merge factor.
After modifying the code to call optimize in the finally block, the following
error message was thrown.
Code Snippet:
nameWriter.optimize(); // errors here
nameWriter.close();
valueWriter.optimize(); //I am using mult
It sounds like you may be running out of file descriptors -- how many
segments are in your index?
The reopen logic looks correct (you are closing the old reader). Is
there anything else that may be holding files open?
Have you changed any of IW's settings, eg mergeFactor?
Mike
On Wed, Dec 30,
I am getting an IOException when doing a "real-time" search, i.e. I am
creating an index using the IndexWriter and also opening the index using an
IndexReader (writer.getReader()) to make sure the document does not already
exist prior to adding it to the index.
The code works perfectly fine multiple times ind
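For reference, the reopen-and-close pattern being checked in this thread looks like this (a sketch using the 2.9-era reopen() API; the class name is invented):

```java
import org.apache.lucene.index.IndexReader;

// Sketch: reopen() returns a new reader only if the index has changed;
// the old reader must then be closed, or it keeps the merged-away
// segment files (and their file descriptors) alive.
public class ReaderRefresher {
    public static IndexReader refresh(IndexReader reader) throws Exception {
        IndexReader newReader = reader.reopen();
        if (newReader != reader) {
            reader.close(); // release the old segments' files
        }
        return newReader;
    }
}
```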
0, 2009 6:37 PM
> To: java-user@lucene.apache.org
> Subject: Re: Performance problems with Lucene 2.9
>
> The problem with this method is that I won't be able to know how many
> total results / pages a search has?
>
>
> For example if I do a search X that returns 1,00
> results, TopDocs is not very
> useful, because the first 200 hits cannot be ranked.
>
> -----
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
> > Sent: Monday, November 30, 2009 5:35 PM
> > To: java-user@lucene.apache.org
> > Subject: Re: Performance problems with Lucene 2.9
> >
> > I'll definitely switch to a Collector.
> >
> > It's just not clear to me whether I should use BooleanQueries or
> > MatchAllDocuments+Filters?
>> > > If you replace a relational database with Lucene, be sure not to think
>> > > in a relational sense with foreign keys / primary keys and so on. In
>> > > general you should flatten everything.
>> > >
>> > > Uwe
>
> To: java-user@lucene.apache.org
> Subject: Re: Performance problems with Lucene 2.9
>
> What is the main difference between Hits and Collectors?
>
> - Mike
> aka...@gmail.com
>
>
> On Mon, Nov 30, 2009 at 11:03 AM, Uwe Schindler wrote:
>
> > And if you only hav
.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de
> -Original Message-
> From: Shai Erera [mailto:ser...@gmail.com]
> Sent: Monday, November 30, 2009 4:56 PM
> To: java-user@lucene.apache.org
> Subject: Re: Performance problems with Luc
Collectors instead.
If I understand the chain of filters, do you think you can code them with a
BooleanQuery to which you add BooleanClauses, each with its Term (field:value)?
You can add clauses w/ OR, AND, NOT etc.
Note that in Lucene 2.9, you can avoid scoring documents very easily, which
is a performance win if you don't need scores (i.e. if you just want to match ev
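Shai's suggestion might look like this in code (field names and values are invented for illustration):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

// Sketch: one BooleanQuery in place of a chain of filters; every clause
// is a single field:value Term, combined with MUST (AND), SHOULD (OR)
// or MUST_NOT (NOT).
public class ActiveUsersQuery {
    public static BooleanQuery build() {
        BooleanQuery q = new BooleanQuery();
        q.add(new TermQuery(new Term("status", "active")), BooleanClause.Occur.MUST);
        q.add(new TermQuery(new Term("type", "user")), BooleanClause.Occur.MUST);
        q.add(new TermQuery(new Term("flag", "deleted")), BooleanClause.Occur.MUST_NOT);
        return q;
    }
}
```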
Hi,
we use Lucene to store around 300 million records. We use the index both
for conventional searching and also for all the system's data - we replaced
MySQL with Lucene because it was simply not working at all due to
the amount of records. Our problem is that we have HUGE perform
Hi, you can find this in 'lucene-misc' contrib jar file
http://lucene.apache.org/java/2_9_1/api/contrib-misc/org/apache/lucene/misc/ChainedFilter.html
On Thu, Nov 19, 2009 at 11:27 PM, Michel Nadeau wrote:
> Hi !
>
> Can someone tell me what is replacing ChainedFilter in Luc
Hi !
Can someone tell me what is replacing ChainedFilter in Lucene 2.9?
I used to do it like this -
h = searcher.search(q, cluCF, cluSort);
Where cluCF is a ChainedFilter declared like this -
Filter cluCF = new ChainedFilter(cluFilters, ChainedFilter.AND);
cluFilters is a Filter[] containing
I noticed that this question has been asked before, but I could not find a
good answer, so I am posting again. Is there a good example of sorting and
pagination with Lucene 2.9? I have looked at the Solr 1.4 source code for
examples and put together some code for testing, but it's not quite working.
I
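A sketch of one way to combine sorting and pagination in 2.9 (class and parameter names are invented): fetch enough top docs to cover the requested page, then slice the page out of scoreDocs.

```java
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TopDocs;

// Sketch: sorted pagination by over-fetching up to the end of the
// requested page, then copying out just that page's ScoreDocs.
public class PagedSearch {
    public static ScoreDoc[] page(IndexSearcher searcher, Query query, Sort sort,
                                  int pageIndex, int pageSize) throws Exception {
        int needed = (pageIndex + 1) * pageSize;
        TopDocs top = searcher.search(query, null, needed, sort);
        int from = pageIndex * pageSize;
        int to = Math.min(needed, top.scoreDocs.length);
        if (from >= to) {
            return new ScoreDoc[0]; // page past the end of the results
        }
        ScoreDoc[] page = new ScoreDoc[to - from];
        System.arraycopy(top.scoreDocs, from, page, 0, page.length);
        return page;
    }
}
```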
e
eMail: u...@thetaphi.de
> -Original Message-
> From: John Wang [mailto:john.w...@gmail.com]
> Sent: Sunday, November 08, 2009 12:36 AM
> To: java-user@lucene.apache.org
> Subject: lucene 2.9+ numeric indexing
>
> Hi guys:
>
> Running into a strange problem:
Hi guys:
Running into a strange problem:
I am indexing into a field a numeric string:
int n = Math.abs(rand.nextInt(100));
Field myField = new Field(MY_FIELD, String.valueOf(n), Store.NO,
    Index.NOT_ANALYZED_NO_NORMS);
myField.setOmitTermFreqAndPositions(true);
doc.add(myField);
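Since 2.9 the idiomatic alternative for numbers is NumericField, which trie-encodes the value and pairs with NumericRangeQuery; a sketch (field name invented):

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.document.NumericField;

// Sketch: index an int as a trie-encoded NumericField rather than a
// plain NOT_ANALYZED string, so numeric range queries work efficiently.
public class NumericExample {
    public static Document makeDoc(int n) {
        Document doc = new Document();
        doc.add(new NumericField("num").setIntValue(n));
        return doc;
    }
}
```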
With the recent release of Apache Lucene 2.9, Lucid Imagination has put
together an in-depth technical white paper on the range of performance
improvements and new features (per segment indexing, trierange numeric
analysis, and more), along with recommendations for upgrading your
Lucene
Hi -
FYI, Lucid's just put out a two white papers, one on Apache Lucene 2.9 and
one on Apache Solr 1.4:
- "What's New in Lucene 2.9" covers a range of performance improvements and
new features (per-segment indexing, trie-range numeric analysis, and more),
along with recommend
Is there a recording of the Webinars for anyone who's missed it?
On Sat, Sep 19, 2009 at 12:03 AM, wrote:
> *Description*
>
> Free Webinar: Apache Lucene 2.9: Discover
Hi Michael,
If you just want the top "n" hits (the way you used to use the Hits
class), just call
TopDocs topDocs = searcher.search(query, n);
Don't worry about the Collector interface unless you actually need it.
-jake
On Sat, Oct 10, 2009 at 1:12 PM, M R wrote:
> Hi
>
> This is the
Hi
This is the example given on the deprecated Hits class about using the new
TopScoreDocCollector class :
TopScoreDocCollector collector = new TopScoreDocCollector(hitsPerPage);
searcher.search(query, collector);
ScoreDoc[] hits = collector.topDocs().scoreDocs;
for (int i = 0; i < hits.length; i++) {
  // ... work with hits[i].doc ...
}
allows the reuse of internal cache structures and prevents a large
number of objects from being garbage collected.
Beside runtime improvements and many changes of expert APIs, Lucene
2.9 introduces a new "TokenStream" API. The new API introduces
stronger typing and enables developers to
Hey Lucene Users,
Heise.de (
http://www.heise.de/open/artikel/Such-Engine-Lucene-in-Version-2-9-erschienen-810377.html)
has just published an article about the new 2.9 release.
Unfortunately they only published the German version, while we tried to get
the English one too. Thanks to Isabel (http://
Hello Rajiv2,
The LocalLucene from SourceForge is not index-compatible with the recently
added spatial contrib in Lucene. You have to reindex your spatial values
(because the index format now uses the new Lucene 2.9 NumericField,
which is now the standard for numeric fields).
Uwe
The required format for contrib/spatial has changed to NumericField,
as of 2.9. Are you building your index with NumericField?
Mike
On Fri, Oct 2, 2009 at 2:04 PM, Rajiv2 wrote:
>
> Hello, I was using Lucene 2.4 and locallucene in my app and upgraded to
> lucene 2.9 and I'm
Hello, I was using Lucene 2.4 and LocalLucene in my app and upgraded to
Lucene 2.9, and I'm using the new spatial contrib package. I've switched
everything from using the LocalLucene-specific classes to using the Lucene
spatial classes for indexing and searching. Everything compiles b
use the internal
ids (e.g. assume id 0 is the first doc and that there is only one id 0
when building a filter - the filter has to just work relative to any
IndexReader given it, and not make any assumptions about ids).
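A minimal per-segment-safe filter sketch (the even-docs selection is arbitrary; the point is that ids are interpreted relative to whichever reader is passed in):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.OpenBitSet;

// Sketch: since 2.9, getDocIdSet is called once per segment reader, so
// a filter must compute its bits against the reader it receives rather
// than against top-level document ids.
public class EvenDocsFilter extends Filter {
    @Override
    public DocIdSet getDocIdSet(IndexReader reader) throws IOException {
        OpenBitSet bits = new OpenBitSet(reader.maxDoc());
        for (int i = 0; i < reader.maxDoc(); i += 2) {
            bits.set(i); // ids are relative to this segment's reader
        }
        return bits;
    }
}
```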
Raf wrote:
> Hello,
> I have tried to switch my application from Lucene 2.4.1 t
Hello,
I have tried to switch my application from Lucene 2.4.1 to Lucene 2.9, but I
have found a problem.
My searcher uses a MultiReader and, when I try to do a search using a custom
filter based on a bitset, it does not behave as it did in Lucene 2.4.
It looks like the new searcher does not use
Hey there,
Until now when using Lucene 2.4 I was always optimizing my index using
compound file after updating it. I was doing that because if not I could
feel a lot performance loss in search responses.
Now in Lucene 2.9 there are per-segment readers, and I have read something
about them performing
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
Hello Lucene users,
On behalf of the Lucene dev community (a growing community far larger
than just the committers) I would like to announce the release of
Lucene 2.9.
While we generally try and maintain full backwards compatibility
between major
Has anyone received a link with the slides from the presentation yet?
-Mike
On Fri, Sep 18, 2009 at 3:56 PM, Erik Hatcher wrote:
> Free Webinar: Apache Lucene 2.9: Discover the Powerful New Features
> ---
>
> Join u
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello Lucene users,
On behalf of the Lucene dev community (a growing community far larger
than just the committers) I would like to announce the fifth (and
hopefully last) release candidate for Lucene 2.9.
Please download and check it out - take it
Free Webinar: Apache Lucene 2.9: Discover the Powerful New Features
---
Join us for a free and in-depth technical webinar with Grant
Ingersoll, co-founder of Lucid Imagination and chair of the Apache
Lucene PMC.
Thursday
Mark Miller wrote:
> Hello Lucene users,
>
> ...
>
> We let out a bug in the lock factory changes we made in RC3 -
> making a new SimpleFSDirectory with a String param would throw
> an illegal state exception - a fix for this is in RC4.
My apologies - not SimpleFSDirectory, but SimpleFSLockFactory
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello Lucene users,
On behalf of the Lucene dev community (a growing community far larger
than just the committers) I would like to announce the fourth release
candidate for Lucene 2.9.
Please download and check it out - take it for a spin and kick
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello Lucene users,
On behalf of the Lucene dev community (a growing community far larger
than just the committers) I would like to announce the third release
candidate for Lucene 2.9.
Please download and check it out – take it for a spin and kick
> http://svn.apache.org/viewvc?view=rev&revision=630698
This may be it. The scorer is sparse and usually in a conjunction with a
dense scorer.
Does the index format matter? I haven't yet built it with 2.9.
Peter
On Wed, Sep 9, 2009 at 10:17 AM, Yonik Seeley wrote:
> On Wed, Sep 9, 2009 at 9:40 AM
>Is it possible that skipTo is very costly with your custom scorer?
It's no more expensive than 'next'. The scorer's 'skipTo' and 'next' methods
call termdocs.skipTo or termdocs.next to get the next 'candidate' doc. This
just checks a BitVector to find the next non-deleted doc. But the scorer
mus
On Wed, Sep 9, 2009 at 9:40 AM, Peter Keegan wrote:
> IndexSearcher.search is calling my custom scorer's 'next' and 'doc' methods
> 64% fewer times. I see no 'advance' method in any of the hot spots'. I am
> getting the same number of hits from the custom scorer.
> Has the BooleanScorer2 logic chan
Right, BooleanQuery will now try to use BooleanScorer (does "out of
order" collection, which does not use skipTo/advance at all, I think)
when possible, instead of BooleanScorer2.
This only applies for boolean queries that have only SHOULD clauses,
and up to 32 MUST_NOT clauses (if there's even 1
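On the collecting side this is gated by Collector.acceptsDocsOutOfOrder(); a minimal sketch (the hit-counting collector is invented for illustration):

```java
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Sketch: a hit-counting Collector that accepts out-of-order docs.
// Returning true from acceptsDocsOutOfOrder() is what allows
// IndexSearcher to pick the out-of-order BooleanScorer (instead of
// BooleanScorer2) for SHOULD-only BooleanQueries.
public class CountingCollector extends Collector {
    public int count;

    @Override public void setScorer(Scorer scorer) {}
    @Override public void setNextReader(IndexReader reader, int docBase) {}
    @Override public void collect(int doc) throws IOException { count++; }
    @Override public boolean acceptsDocsOutOfOrder() { return true; }
}
```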
How about the new score in-order / out-of-order stuff? It was an option
before, but I think now it uses what's best by default? And it pairs with
the collector? I didn't follow any of that closely though.
- Mark
Peter Keegan wrote:
> IndexSearcher.search is calling my custom scorer's 'next' and 'doc' me
IndexSearcher.search is calling my custom scorer's 'next' and 'doc' methods
64% fewer times. I see no 'advance' method in any of the hot spots'. I am
getting the same number of hits from the custom scorer.
Has the BooleanScorer2 logic changed?
Peter
On Wed, Sep 9, 2009 at 9:17 AM, Yonik Seeley <
On Wed, Sep 9, 2009 at 9:17 AM, Yonik
Seeley wrote:
> On Wed, Sep 9, 2009 at 8:57 AM, Peter Keegan wrote:
>> Using JProfiler, I observe that the improvement
>> is due to a huge reduction in the number of calls to TermDocs.next and
>> TermDocs.skipTo (about 65% fewer calls).
>
> Indexes are searched
On Wed, Sep 9, 2009 at 8:57 AM, Peter Keegan wrote:
> Using JProfiler, I observe that the improvement
> is due to a huge reduction in the number of calls to TermDocs.next and
> TermDocs.skipTo (about 65% fewer calls).
Indexes are searched per-segment now (i.e. MultiTermDocs isn't normally used).
O
Hi all:
I have already integrated Lucene 2.9 RC2 with Lucene Domain Index:
http://docs.google.com/Doc?id=ddgw7sjp_54fgj9kg
As usual, a new Lucene version makes for a faster product :)
All my internal tests run OK and I only need to re-test on the 10g database.
Once Lucene 2.9 is ready for
Mark Miller wrote:
>
> Download release candidate 1 here:
> http://people.apache.org/~markrmiller/staging-area/lucene2.9rc2/
>
In case anyone catches - yes that is a cut and paste typo - should read
release candidate 2 (obvious, but just to cross my t's).
--
- Mark
http://www.lucidimagination.co
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello Lucene users,
On behalf of the Lucene dev community (a growing community far larger
than just the committers) I would like to announce the second release
candidate for Lucene 2.9.
Please download and check it out – take it for a spin and kick
The dist build issues have been addressed and RC2 will include the
missing analyzer and db contrib binaries.
Unfortunately, people.apache.org is not up at the moment
(https://blogs.apache.org/infra/entry/apache_org_downtime_initial_report),
but I will put up Lucene 2.9 RC2 when it comes back up
Apologies - you are correct - contrib/analyzers is in src but not the
jar distrib. I will address whatever is up with the build process and
put up another RC when apache servers are back up.
Thanks for pointing this out,
- Mark
Bogdan Ghidireac wrote:
> Thank you, Lucene 2.9 is a great rele
Thank you, Lucene 2.9 is a great release...
I have one issue so far - I cannot find the contrib/analyzers jars,
only the sources are present.
Bogdan
On Fri, Aug 28, 2009 at 1:17 AM, Mark Miller wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hello Lucene users,
>
>
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello Lucene users,
On behalf of the Lucene dev community (a growing community far larger
than just the committers) I would like to announce the first release
candidate for Lucene 2.9.
Please download and check it out – take it for a spin and kick
I hope July. Could easily be August though. I'm kicking and screaming to get
it out soon though. It's been hurting my highbrow reputation.
On Tue, Jun 30, 2009 at 2:41 PM, Siraj Haider wrote:
> is there an ETA for Lucene 2.9 release?