Hi,
Are you using the compound file format? If yes, have you set it properly
in solrconfig.xml? If not, change it to:
<useCompoundFile>true</useCompoundFile> (this is 'false' by default) under
the tags:
<indexDefaults>...</indexDefaults>
and <mainIndex>...</mainIndex>
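Put together, the relevant fragment of solrconfig.xml would look roughly like this (a sketch; the surrounding elements and other settings are assumed from a stock Solr 1.x config, not taken from your file):

```xml
<!-- solrconfig.xml (sketch): enable the compound file format -->
<indexDefaults>
  <useCompoundFile>true</useCompoundFile>
  <!-- other index defaults ... -->
</indexDefaults>

<mainIndex>
  <useCompoundFile>true</useCompoundFile>
  <!-- other main-index settings ... -->
</mainIndex>
```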
Aleksander
Hi,
Just wanted to know: is the DataImportHandler available in Solr 1.3
thread-safe? I would like to run multiple instances of the DataImportHandler
concurrently, posting my various sets of data from the DB to the index.
Can I do this by registering the DIH multiple times with various names in
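Registering the handler more than once would look roughly like this in solrconfig.xml (a sketch: the handler names and data-config file names are made-up examples, and whether concurrent imports are safe in 1.3 is exactly the open question):

```xml
<!-- solrconfig.xml (sketch): two independently named DIH instances,
     each with its own data-config file -->
<requestHandler name="/dataimport-products"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">products-data-config.xml</str>
  </lst>
</requestHandler>

<requestHandler name="/dataimport-users"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">users-data-config.xml</str>
  </lst>
</requestHandler>
```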
We are about to release field collapsing on our production site, but our
index size is not as big as yours.
Collapsing is definitely added overhead. You can do some load testing and
benchmark against a dataset like the one you expect in production, since
the SOLR-236 patch is currently available.
Just use the query analysis page with appropriate values. It will show how
each filter factory and analyzer breaks up the terms at the various analysis
stages. Especially check the EnglishPorterFilterFactory analysis.
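For reference, the chain the analysis page walks through is whatever is declared for the field type in schema.xml; a sketch of a typical Solr 1.3 text field follows (the exact tokenizer and filter order here is an assumption, not your actual schema):

```xml
<!-- schema.xml (sketch): each filter below is one row on the analysis page -->
<fieldType name="text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
  </analyzer>
</fieldType>
```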
Jeff Newburn wrote:
I am trying to figure out how the synonym filter processes
I have been reading the Solr 1.3 wiki, which says that to fetch documents
from each core in a multi-core setup we need to request each core
independently.
I was under the impression that the Solr multi-core feature might use
Lucene's MultiSearcher to search across multiple cores.
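As the wiki describes, each core gets its own URL path, so a client simply issues one request per core; a sketch that only prints the request URLs (the host, port, and core names are assumptions for illustration):

```shell
# Each core in a multi-core setup is addressed via its own URL path;
# localhost:8983 and the core names are made-up defaults.
for core in core0 core1; do
  echo "GET http://localhost:8983/solr/${core}/select?q=ipod&wt=xml"
done
```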
Anyone with
Subject: Re: SOLR OOM (out of memory) problem
On 21-May-08, at 4:46 AM, gurudev wrote:
Hi
We currently host an index of approx. 12GB on 5 Solr slave machines, which
are load balanced in a cluster. At some point, usually after 8-10
hours, some Solr slave gives an out-of-memory error, after which it just
stops responding, which then requires a restart, and after restart
Just to add more:
The JVM heap allocated is 6GB, with an initial heap size of 2GB. We use
quad-processor (8 CPUs) Linux servers for the Solr slaves.
We use facet searches and sorting.
The document cache is set to 7 million (the total number of documents in
the index), filtercache 1
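A documentCache sized to hold all 7 million documents can plausibly account for the OOM on its own; a back-of-the-envelope sketch (the ~1 KiB average cached-document size is a pure assumption, not measured from this index):

```python
# Rough heap estimate for a documentCache sized to hold every document.
cache_entries = 7_000_000   # documentCache size from the post
avg_doc_bytes = 1024        # assumption: ~1 KiB per cached document
heap_gib = 6                # allocated JVM heap from the post

cache_gib = cache_entries * avg_doc_bytes / 2**30
print(f"documentCache alone: ~{cache_gib:.1f} GiB of a {heap_gib} GiB heap")
# -> documentCache alone: ~6.7 GiB of a 6 GiB heap
```

Under that assumption the cache alone would exceed the whole heap, which suggests measuring actual entry sizes and shrinking the documentCache before reaching for a bigger heap.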