> ... provided the index stays the same (no CRUD takes place).
>
> In searchAfter we pass in an "after" doc, so I was wondering if that
> changes how a query is optimized at all. By looking at the code, I'm
> thinking no, but was wondering if there are any other parameters here
> that I am not aware of that would influence query optimization
> differently in search/searchAfter. Thanks!

--
Adrien
Hello folks,
I have some Lucene indexes in my project; most of them are created once and
updated infrequently, about once a week or monthly. The index sizes are
about 20GB, and as more inserts are done the indexes grow, so I'd like to know
what the best index optimization strategy would be.
Hi,
I am building a search for my application. For the entered search term (foo):
1) I look for an exact match (foo); if it returns NULL,
2) I use fuzzy search (foo~); if it returns NULL,
3) I use wildcard (foo*).
Is this an efficient way? Or is there a Lucene method that does all of these?
Thanks.
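One common alternative to three sequential searches is to put all three clauses into a single BooleanQuery and let scoring rank exact matches first. A minimal sketch; the field name "content" and the boost values are illustrative assumptions, not from the original mail:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class EscalatingQuery {
    // Builds one query that prefers exact matches over fuzzy and prefix hits.
    // The field name "content" and the boosts are illustrative assumptions.
    static Query build(String text) {
        Term term = new Term("content", text);
        return new BooleanQuery.Builder()
            .add(new BoostQuery(new TermQuery(term), 4.0f), BooleanClause.Occur.SHOULD)  // exact: foo
            .add(new BoostQuery(new FuzzyQuery(term), 2.0f), BooleanClause.Occur.SHOULD) // fuzzy: foo~
            .add(new PrefixQuery(term), BooleanClause.Occur.SHOULD)                      // prefix: foo*
            .build();
    }
}
```

Whether this beats the three-step fallback depends on the corpus; fuzzy and prefix clauses are more expensive than a plain TermQuery, so it is worth measuring both approaches.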
Ok! I will open an issue in JIRA then.
On Saturday, August 13, 2016 3:26 PM, Adrien Grand wrote:
The explanation makes sense, I think you're right. Even though I don't
think this optimization would be used often, it would certainly help
performance when it is used.
Le sam. 13 août 2016 à 12:21, Spyros Kapnissis a
écrit :
> Ok, I had some time to look a bit further into it.
> ...le for all possible permutations/values
> ... i'd have to think about it.
>
> An interesting edge case to think about is "(X X Y #X)" w/minshouldmatch=2
> ... pretty sure that would give you very diff scores if you rewrote it to
> "(+X X Y)" (or "(+X Y)") w/minshouldmatch=1
>
> : Hello all, I noticed while debugging a query that BooleanQuery will
> : rewrite itself to remove FILTER clauses that
Hello all,
I noticed while debugging a query that BooleanQuery will rewrite itself to
remove FILTER clauses that are also MUST as an optimization/simplification,
which makes total sense. So (+f:x #f:x) will become (+f:x).
However, shouldn't there also be another optimization to remove F
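The case being described can be sketched with the current BooleanQuery.Builder API (the 2016-era code looked slightly different); this just constructs (+f:x #f:x) and relies on BooleanQuery's rewrite to drop the redundant FILTER clause:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

public class MustFilterDedup {
    // Builds (+f:x #f:x): the same clause as both MUST and FILTER.
    // BooleanQuery's rewrite removes the non-scoring FILTER duplicate,
    // leaving (+f:x), which is the simplification discussed above.
    static BooleanQuery build() {
        TermQuery x = new TermQuery(new Term("f", "x"));
        return new BooleanQuery.Builder()
            .add(x, Occur.MUST)    // scoring, required
            .add(x, Occur.FILTER)  // non-scoring duplicate, redundant
            .build();
    }
}
```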
On 07/13/2016 12:43 AM, Siraj Haider wrote:
We currently use Lucene 2.9 and to keep the indexes running faster we optimize
the indexes during night. In our application the volume of new documents coming
in is very high so most of our indexes have to merge segments during the day
too, when the document count reaches certain number. This ca
If you have many deletes on the index (not typical for a time-based index)
then forceMerge (or just forceMergeDeletes) will reclaim disk space.
Fewer file handles will be needed to open the index.
Some searches may be faster, but you should test in your case if that's
really the case. Much progr
Hello,
I am using indexes that can be as large as 25 Gb.
Indexes are created for a specific time window (for instance it can be weekly
based).
Once the week is passed they are not written to anymore.
I have seen the IndexWriter.forceMerge(int) operation, and I had several
questions:
- Afte
Hi,
I would suggest to read: http://www.searchworkings.org/blog/-/blogs/380798
In general, if the index changes often, don't force merges. IndexWriter
automatically merges to a suitable number of segments.
Uwe
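Uwe's advice (let IndexWriter merge on its own rather than forcing merges) corresponds in current Lucene to tuning the merge policy on IndexWriterConfig. A sketch; the numeric values are illustrative assumptions, not recommendations:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;

public class MergePolicyConfig {
    // Tune background merging instead of forcing merges by hand.
    // The numbers below are examples only; benchmark for your index.
    static IndexWriterConfig config() {
        TieredMergePolicy mp = new TieredMergePolicy();
        mp.setMaxMergedSegmentMB(5 * 1024);  // cap merged segment size (~5 GB)
        mp.setSegmentsPerTier(10);           // segments allowed per tier before merging
        return new IndexWriterConfig(new StandardAnalyzer()).setMergePolicy(mp);
    }
}
```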
Gili Nachum schrieb:
Hi there Lucene samurai!
I was wondering how important single-segment merging is for search-time
performance compared to a more modest merging goal, like merging down to
just 4 segments.
Currently my system merges every evening; it takes ~1-2 hours and
invalidates the file-system cache.
What wo
We have been facing a critical problem which is affecting production on
customer sites: while optimization takes place on larger indices
(size > 2 GB), the indexer threads get into a blocked state,
since the index writer opened for optimization is never gettin
Waits of several hours on a 4Gb index sounds very unlikely. Are you
sure there isn't something else going on that is blocking things?
What version of lucene? Decent, error-free, hardware?
As for optimize, I'd skip it altogether, or schedule it occasionally
when there is no or low activity on the
Hi,
Our Lucene index grew to about 4 GB .
Unfortunately it brought up a performance problem of slow file merging.
We have:
1. A writer thread: once an hour it looks for modified documents and
updates the Lucene index.
Usually there are only few modifications, but sometimes we switch the
entire co
Hi,
That is as expected. When IndexReader or IndexSearcher are open, the snapshot
of this index is preserved until you reopen it, as all readers only see the
index in the state when it was opened, so disk space is still acquired and on
windows you even see the files. For optimize (what you shou
New information: it appears that the index size increasing (not always
doubling but going up significantly) occurs when I search the index while
building it. Calling indexWriter.optimize(1, true); when I'm done adding
documents sometimes reduces the index down to size, but not always.
Has anyon
IndexWriter.setInfoStream -- when you set that, it produces lots of
verbose output detailing what IW is doing to the index...
Mike
On Wed, Feb 9, 2011 at 8:06 PM, Phil Herold wrote:
> I didn't have any errors or exceptions. Sorry to be dense, but what exactly
> is the "infoStream output" you're
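The infoStream output Mike refers to is enabled on the writer; in recent Lucene versions the call lives on IndexWriterConfig (2.x used IndexWriter.setInfoStream directly). A sketch:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;

public class InfoStreamConfig {
    // Route IndexWriter's verbose diagnostics (flushes, merges, file
    // deletions) to stdout, so you can see why index files are or are
    // not being deleted.
    static IndexWriterConfig config() {
        return new IndexWriterConfig(new StandardAnalyzer())
            .setInfoStream(System.out);
    }
}
```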
I didn't have any errors or exceptions. Sorry to be dense, but what exactly
is the "infoStream output" you're asking about?
>This is not expected.
>
>Did the last IW exit "gracefully"? If so, it should delete the old
>segments after swapping in the optimized one.
>Can you post infoStre
I know that the size of a Lucene index can double while optimization is
underway, but it's supposed to eventually settle back down to the original
size, correct? We have a Lucene index consisting of 100K documents, that is
normally about 12GB in size. It is split across 10 sub-indexes whi
Have you considered having fewer indexes, each storing data for
multiple users? Obviously with some indexed field that you can use
for restricting searches to data for that user.
I believe that is more common practice for this sort of scenario and
is known to work well. You seem to be adding pos
Thank you, Ian.
I have a large number of dynamically changing indexes, so calling
expungeDeletes() and optimize() is very costly.
At this point I am opting to just set an optimal merge factor and skip
optimize().
On Tue, Oct 5, 2010 at 2:54 PM, Ian Lea wrote:
> Deleted docs will be removed by lucene
Deleted docs will be removed by lucene at some point - there is no
need to run optimize.
Read the javadocs for IndexWriter for details. See also
expungeDeletes(). That may be just what you need.
--
Ian.
On Tue, Oct 5, 2010 at 7:48 AM, Naveen Kumar wrote:
> Hi
> I have one more question, does
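The expungeDeletes() call Ian mentions was renamed forceMergeDeletes() in later Lucene versions. A sketch of reclaiming space from deletions without a full optimize; the index path is a placeholder:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class ReclaimDeletes {
    // Merge only those segments that carry deletions, instead of a full
    // optimize/forceMerge(1). Much cheaper, and enough to reclaim disk
    // space held by deleted documents.
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
             IndexWriter writer =
                 new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            writer.forceMergeDeletes();
            writer.commit();
        }
    }
}
```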
Hi
I have one more question: does Lucene purge deleted documents before
merging the segments, or is purging of deleted documents done only when
optimizing?
On Thu, Sep 30, 2010 at 4:59 PM, Naveen Kumar wrote:
> Hi
> I have a Very large number (say 3 million) of frequently changing Small
> index
Hi
I have a very large number (say 3 million) of frequently changing small
indexes. 90% of these indexes contain about 50 documents, while a few (2-3%)
have about 100,000 documents each (these being the more frequently
used indexes).
Each index belongs to a signed-in user, thus can have unpre
-Original Message-
From: Danil TORIN [mailto:torin...@gmail.com]
Sent: Friday, September 24, 2010 12:01 AM
To: java-user@lucene.apache.org
Subject: Re: In lucene 2.3.2, needs to stop optimization?
Is it possible for you to migrate to 2.9.x ? Or even 3.x ?
There are some huge optimization in 2.9 on reopening
Is it possible for you to migrate to 2.9.x? Or even 3.x?
There are some huge optimizations in 2.9 on reopening indexes that
significantly improve search speed.
I'm not sure, but I think indexWriter.getReader() for almost-realtime
was added in 2.9, so you can keep your writer always open an
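The getReader() feature mentioned here became DirectoryReader.open(IndexWriter) in later versions. A near-real-time sketch under that assumption, with a placeholder path:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

public class NrtSearch {
    // Keep the writer open and open a near-real-time reader on top of it,
    // so newly added documents are searchable without a full commit and
    // reopen cycle.
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
             IndexWriter writer =
                 new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            // ... writer.addDocument(...) from the indexing thread ...
            try (DirectoryReader reader = DirectoryReader.open(writer)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                // searcher.search(query, 10);
            }
        }
    }
}
```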
be appreciated,
Lisheng
-Original Message-
From: Zhang, Lisheng [mailto:lisheng.zh...@broadvision.com]
Sent: Thursday, September 23, 2010 6:11 PM
To: java-user@lucene.apache.org
Subject: In lucene 2.3.2, needs to stop optimization?
Hi,
We are using lucene 2.3.2, now we need to index e
Hi,
We are using lucene 2.3.2; now we need to index each document as
fast as possible, so users can search it almost immediately.
So I am considering stopping IndexWriter optimization during real time;
then at relatively off-peak times, like late night, we may call the IndexWriter optimize
method explicitly
- Original Message
> From: Jamie Band
> To: java-user@lucene.apache.org
> Sent: Tue, November 10, 2009 11:43:30 AM
> Subject: Lucene index write performance optimization
>
> Hi There
>
> Our app spends alot of time waiting for Lucene to finish writing to the
> ind
On Tue, Nov 10, 2009 at 11:43 AM, Jamie Band wrote:
> As an aside note, is there any way for Lucene to support simultaneous writes
> to an index?
The indexing process is highly parallelized... just use multiple
threads to add documents to the same IndexWriter.
-Yonik
http://www.lucidimagination.
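Yonik's point, a single shared IndexWriter fed by multiple threads, might be sketched like this (thread count and field names are illustrative assumptions):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;

public class ParallelIndexing {
    // One shared IndexWriter, many feeder threads: IndexWriter is
    // thread-safe, so parallel addDocument calls are the supported way
    // to parallelize indexing (never open two writers on one index).
    static void indexAll(IndexWriter writer, Iterable<String> ids) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String id : ids) {
            pool.submit(() -> {
                Document doc = new Document();
                doc.add(new StringField("id", id, Field.Store.YES));
                writer.addDocument(doc);
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```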
You might try re-implementing, using ThreadPoolExecutor
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ThreadPoolExecutor.html
glen
2009/11/10 Jamie Band :
> Hi There
>
> Our app spends alot of time waiting for Lucene to finish writing to the
> index. I'd like to minimize this. If y
Hi There
Our app spends a lot of time waiting for Lucene to finish writing to the
index. I'd like to minimize this. If you have a moment to spare, please
let me know if my LuceneIndex class presented below can be improved upon.
It is used in the following way:
luceneIndex = new
LuceneIndex(C
Subject: Re: Index files not deleted after optimization
On Tue, Nov 3, 2009 at 9:45 AM, Ganesh wrote:
> My IndexReader and Searcher is open all the time. I am reopening it at
> constant interval.
>
> Below are the code sequence.
>
> 1. DB optimize
> 2. Close writer
> 3. Open writer
> 4. Reopen new reader
> 5. Close old reader
> 6. Close old searcher.
Subject: Re: Index files not deleted after optimization
It depends on the relative timing.
If the old IndexReader is still open when the optimize completes then
the files it has open cannot be deleted.
But, if that IndexReader hadn't been reopened in a while, it's
possible it d
> searcher.close();
>
> Regards
> Ganesh
>
> - Original Message -
> From: "Michael McCandless"
> Sent: Monday, November 02, 2009 6:03 PM
> Subject: Re: Index files not deleted after optimization
Something must still have these file handles open at the time the
optimization completed.
EG do you have a reader open on this index?
Mike
On Mon, Nov 2, 2009 at 6:54 AM, Ganesh wrote:
> Hello all,
>
> I am using Lucene 2.4.1 and My app is running inside Tomcat.
>
> In Windows,
Hello all,
I am using Lucene 2.4.1 and my app is running inside Tomcat.
In Windows, after database optimization, the old db files are not getting
deleted. I enabled the info stream and found the below entries. I used
Process Explorer from SysInternals to view the lock file, but old db files are
: Subject: Why perform optimization in 'off hours'?
: In-Reply-To:
: <5b20def02611534db08854076ce825d8032db...@sc1exc2.corp.emainc.com>
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please
Thanks for the reply.
I suspected that was the case, I was just wondering if there was something more
to it.
- Original Message
> From: Shai Erera
> To: java-user@lucene.apache.org
> Sent: Monday, August 31, 2009 10:28:41 AM
> Subject: Re: Why perform optimization i
e the
optimize() process itself may take several hours, so that a nightly job
won't be enough.
Shai
On Mon, Aug 31, 2009 at 6:25 PM, Ted Stockwell wrote:
> Hi All,
>
> I am new to Lucene and I was reading 'Lucene in Action' this weekend.
> The book recommends that optimiza
Hi All,
I am new to Lucene and I was reading 'Lucene in Action' this weekend.
The book recommends that optimization be performed when the index is not in use.
The book makes it clear that optimization *may* be performed while indexing but
it says that optimizing while indexing make
Mike, thanks very much for your comments! I won't have time to try these
ideas for a little while but when I do I'll definitely post the results.
On Fri, Aug 7, 2009 at 12:15 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:
> On Thu, Aug 6, 2009 at 5:30 PM, Nigel wrote:
> >> Actually I
> > From: Laxmilal Menariya
> > To: java-user@lucene.apache.org
> > Sent: Monday, August 10, 2009 3:23:17 AM
> > Subject: Taking too much time in optimization
> >
> > Hello everyone,
> >
> > I have created a sample application & indexing fil
- Original Message
> From: Laxmilal Menariya
> To: java-user@lucene.apache.org
> Sent: Monday, August 10, 2009 3:23:17 AM
> Subject: Taking too much time in optimization
>
> Hell
Hello everyone,
I have created a sample application that indexes file properties, and have
indexed approx. 107K files.
I was getting an OutOfMemoryError after 100K files while indexing; the cause was
maxBufferedDocs=100K. But after that I am calling the optimize() method, and this
is taking too much time, approx. 12 hours,
On Thu, Aug 6, 2009 at 5:30 PM, Nigel wrote:
>> Actually IndexWriter must periodically flush, which will always
>> create new segments, which will then always require merging. Ie
>> there's no way to just add everything to only one segment in one
>> shot.
>>
>
> Hmm, that makes sense now that you
On Wed, Aug 5, 2009 at 3:50 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:
> On Wed, Aug 5, 2009 at 12:08 PM, Nigel wrote:
> > We periodically optimize large indexes (100 - 200gb) by calling
> > IndexWriter.optimize(). It takes a heck of a long time, and I'm
> wondering
> > if a more
We periodically optimize large indexes (100 - 200gb) by calling
IndexWriter.optimize(). It takes a heck of a long time, and I'm wondering
if a more efficient solution might be the following:
- Create a new empty index on a different filesystem
- Set a merge policy for the new index so it puts eve
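The copy-into-a-fresh-index idea can be sketched with IndexWriter.addIndexes, which copies the source segments into the destination and merges them per the destination's merge policy. The paths are placeholders, and whether this actually beats optimize() would need measuring:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class CopyOptimize {
    // Build a fresh index on another filesystem from an existing one,
    // then collapse it to a single segment (what optimize() used to do;
    // forceMerge is the later name).
    public static void main(String[] args) throws Exception {
        try (FSDirectory src = FSDirectory.open(Paths.get("/old/index"));
             FSDirectory dst = FSDirectory.open(Paths.get("/new/index"));
             IndexWriter writer =
                 new IndexWriter(dst, new IndexWriterConfig(new StandardAnalyzer()))) {
            writer.addIndexes(src);  // copy the source segments over
            writer.forceMerge(1);    // single segment
            writer.commit();
        }
    }
}
```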
look at http://issues.apache.org/jira/browse/LUCENE-1567 (New flexible
query parser)
This new parser allows queries to be rewritten/optimized internally, and it
is backward compatible.
Preetham Kajekar wrote:
Hi,
I am wondering if Lucene internally rewrites/optimizes Query. I am
programmatically
: > ((Src:Testing Dst:Test) (Src:Test2 Port:http)).
: > In this case, would Lucene optimize to remove the unwanted BooleanQueries ?
: Alas, Lucene in general does not do such structural optimization (and
: I agree, we should). EG we could do it during Query.rewrite().
Except that flat
Thanks for the response! Will post my findings.
Thx,
~preetham
Michael McCandless wrote:
Alas, Lucene in general does not do such structural optimization (and
I agree, we should). EG we could do it during Query.rewrite().
There are certain corner cases that are handled, eg a BooleanQuery
with a single BooleanClause, or BooleanQuery where
minimumNumberShouldMatch exceeds the number
Hi,
I am wondering if Lucene internally rewrites/optimizes a Query. I am
programmatically generating queries based on various user options, and
quite often I have BooleanQueries wrapped inside BooleanQueries, etc.
Like:
((Src:Testing Dst:Test) (Src:Test2 Port:http)).
In this case, would Lucene optim
(I posted this question to "solr user" forum, but didn't get a clear answer.
So re-post it here.)
Our index size is about 60G. Most of the time, the optimization works fine.
But this morning, the optimization kept creating new segment files until all
the free disk space (300G) was
There is not enough information here to even guess
at an answer. Please post the stack trace and any
other relevant information you can think of and maybe
there'll be some useful pointers people can give.
Best
Erick
On Mon, Feb 2, 2009 at 7:21 PM, Scott Smith wrote:
> I'm optimizing a database a
I'm optimizing a database and getting the error:
maxClauseCount is set to 1024
I understand what that means coming out of the query parser, but what
does it mean coming from the optimizer?
Scott
Lucene implements ACID (like modern databases), with the restriction
that only one transaction may be open at a time.
So, once commit (your step 4) is called and succeeds, Lucene
guarantees that any prior changes (eg your step 2) are written to
stable storage and will not be lost ("durability").
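The open -> write -> optimize -> commit sequence in question might look like this in a later API (forceMerge replaced optimize in 3.5+; the path and field are placeholders). Per the durability guarantee above, once commit() returns, the added document survives a crash:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class DurableWrite {
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
             IndexWriter writer =
                 new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) { // (1) open
            Document doc = new Document();
            doc.add(new TextField("body", "some text", Field.Store.NO));
            writer.addDocument(doc);  // (2) write
            writer.forceMerge(1);     // (3) optimize
            writer.commit();          // (4) commit: changes are now durable
        }
    }
}
```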
Hi,
I was reading the 2.4 javadoc as well as other sources but couldn't
find clear answer.
I need to know whether the sequence
(1) open index writer -> (2) write something to index -> (3)
optimize index -> (4) commit
can corrupt the index / lose the data written at the point of (2)
after (4) is
n 2.4. We talked a while back on the
> dev list about doing releases more frequently. I'll start a thread on
> the dev list to see what people think...
>
> Mike
>
mattspitz wrote:
Is there no way to ensure consistency on the disk with 2.3.2?
Unfortunately no.
This is a little off-topic, but is it worth upgrading to 2.4 right
now if
I've got a very stable system already implemented with 2.3.2? I don't
really want to introduce oddities because I'm u
You're welcome! I agree: optimizing seek time seems likely to be the
biggest win.
Will a faster disk cache access affect the optimization and
merging? I
don't really have a sense for which of the segments are kept in
memory during
a merge. It doesn't make sense to me th
Mike-
Are the index files synced on writer.close()?
Thank you so much for your help. I think the seek time is the issue,
especially considering the high merge factor and the fact that the segments
are scattered all over the disk.
Will a faster disk cache access affect the optimization and
mattspitz wrote:
So, my indexing is done in "rounds", where I pull a bunch of
documents from
the database, index them, and flush them to disk. I manually call
"flush()"
because I need to ensure that what's on disk is accurate with what
I've
pulled from the database.
On each round, then,
> - Original Message
>> From: mattspitz <[EMAIL PROTECTED]>
>> To: java-user@lucene.apache.org
>> Sent: Saturday, August 16, 2008 4:07:52 AM
>> Subject: Appropriate disk optimization for large index?
>>
>>
>> Hi! I'
-- Lucene - Solr - Nutch
> Hi! I'm using Lucene 2.3.2
for your help,
Matt
: My understanding is that an optimized index gives the best search
there is an inherent inconsistency in your question -- you say you
optimize your index before using it because you heard that makes searches
faster, but in your original question you said...
> I'd like to shorten the time it
I'll run some tests. Thank you.
> From: [EMAIL PROTECTED]
> To: java-user@lucene.apache.org
> Subject: Re: Index optimization ...
> Date: Wed, 30 Jul 2008 11:12:28 -0400
>
> What version of Lucene are you using? What is your current
> mergeFactor? Lowering this (mi
my index gets fully optimized every 4 hours and the time it takes to
fully optimize the index is longer than I like. Is there anything
that I can do to speed up the optimization? I don't fully understand
the different parameters (e.g. merge factor). If I decrease the
merge factor, would
CTED]> wrote:
> My understanding is that an optimized index gives the best search
> performance. I can change my configuration to optimize the index every 24
> hours. However, I still would like to know if there is a way to speed up
> optimization by tweaking parameters like the mer
the "new" inactive index up to speed
to compensate for the documents it missed while the "old" inactive index got
updated?
Just curious,
Anand
-Original Message-
From: Dragon Fly <[EMAIL PROTECTED]>
Date: Wed, 30 Jul 2008 10:00:25
To:
Subject: RE: Index optimization ...
I have t
My understanding is that an optimized index gives the best search performance.
I can change my configuration to optimize the index every 24 hours. However, I
still would like to know if there is a way to speed up optimization by tweaking
parameters like the merge factor.
> Date: Wed, 30
I have two copies (active/inactive) of the index. Searches are executed
against the "active" index and new documents get added to the "inactive" copy.
The two indexes get swapped every 4 hours (so that new documents are visible to
the end user). Optimization is done before the inactive copy is made active.
o speed up the
optimization? I don't fully understand the different parameters (e.g. merge
factor). If I decrease the merge factor, would it make the indexing slower
(which I'm OK with) but the optimization faster? Thank you.
> Date: Tue, 29 Jul 2008 08:32:46 +0200
> From: [EMAIL
John Griffin:
> Use IndexWriter.setRAMBufferSizeMB(double mb) and you won't have to
> sacrifice anything. It defaults to 16.0 MB so depending on the size of your
> index you may want to make it larger. Do some testing at various values to
> see where the sweet spot is.
>
Also, have a look at
htt
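In post-3.x Lucene the RAM buffer setting moved from IndexWriter to IndexWriterConfig; a sketch, with the value being an example rather than a recommendation:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;

public class RamBufferConfig {
    // Buffer more documents in RAM before flushing a segment; fewer,
    // larger flushes mean fewer segments to merge later. 64 MB is an
    // arbitrary example value; benchmark to find the sweet spot.
    static IndexWriterConfig config() {
        return new IndexWriterConfig(new StandardAnalyzer())
            .setRAMBufferSizeMB(64.0);
    }
}
```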
Try IndexWriter.optimize(int maxNumSegments)
On Mon, Jul 28, 2008 at 11:30 PM, Dragon Fly <[EMAIL PROTECTED]>wrote:
> I'd like to shorten the time it takes to optimize my index and am willing
> to sacrifice search and indexing performance. Which parameters (e.g. merge
> factor) should I change?
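optimize(int maxNumSegments), called forceMerge(int) in later versions, stops merging once the index has at most that many segments, which is usually far cheaper than merging to one. A sketch with a placeholder path:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class PartialOptimize {
    // Merge down to at most 4 segments instead of 1: most of the search
    // speedup for a fraction of the merge time.
    public static void main(String[] args) throws Exception {
        try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
             IndexWriter writer =
                 new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            writer.forceMerge(4);
            writer.commit();
        }
    }
}
```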
From: Dragon Fly [mailto:[EMAIL PROTECTED]
Sent: Monday, July 28, 2008 12:00 PM
To: java-user@lucene.apache.org
Subject: Index optimization ...
I'd like to shorten the time it takes to optimize my index and am willing to
sacrifice search and indexing performance. Which parameters (e.g. merge
factor) should I change? Thank you.
files in
it? What I do now is delete it manually. Is there any way I can
delete it automatically? Any code that I can refer to?
Hello,
I am very new to Lucene. I am facing one problem.
I have one very large index which is constantly getting updated (add and delete)
at regular intervals, after which I am optimizing the whole index (otherwise
searches will be slow), but optimization takes time. So I was thinking to merge
only the segments of lesser size (I guess it will be a good compromise between
search time and optimization