From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, June 03, 2011 4:45 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr performance tuning - disk i/o?
Quick impressions:
Faceting is usually best done on fields that don't have lots of unique values, for three reasons:
1 It's questionable how
Polling interval was in reference to slaves in a multi-machine master/slave setup, so probably not a concern just at present.
Warmup time of 0 is not particularly normal; I'm not quite sure what's going on.
thanks,
Demian
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Monday, June 06, 2011 11:59 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr performance tuning - disk i/o?
Hello,
I'm trying to move a VuFind installation from an ailing physical server into a
virtualized environment, and I'm running into performance problems. VuFind is
a Solr 1.4.1-based application with fairly large and complex records (many
stored fields, many words per record). My particular
...@villanova.edu
To: solr-user@lucene.apache.org
Sent: Fri, June 3, 2011 8:44:33 AM
Subject: Solr performance tuning - disk i/o?
This doesn't seem right. Here are a couple of things to try:
1 attach debugQuery=on to your long-running queries. The QTime returned is the time taken to search, NOT including the time to load the docs. That'll help pinpoint whether the problem is the search itself, or assembling the
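As a concrete sketch of the debugQuery suggestion (host, core, and field names here are placeholders, not from the original thread):

```shell
# debugQuery=on makes Solr append a timing and scoring breakdown to the
# response. QTime covers only the search itself; loading stored fields
# for the matched docs is extra, so the breakdown helps localize which
# part is slow.
curl 'http://localhost:8983/solr/select?q=title:history&rows=10&debugQuery=on'
```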
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, June 03, 2011 9:41 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr performance tuning - disk i/o?
Hi,
We migrated to Solr a few days back, but after going live we have noticed a performance drop, especially when we do a delta index, which we are executing every 1 hour with around 100,000 records. We have a multi-core Solr server running on a Linux machine, with 4GB given to the
- Original Message
From: Rohit ro...@in-rev.com
To: solr-user@lucene.apache.org
Sent: Fri, June 3, 2011 11:49:28 AM
Subject: Solr Performance
- Original Message
From: Demian Katz demian.k...@villanova.edu
To: solr-user@lucene.apache.org
Sent: Fri, June 3, 2011 11:21:52 AM
Subject: RE: Solr performance tuning - disk i/o?
Thanks to you and Otis for the suggestions! Some more
want to search filterlist on keys (e.g. fl=keys)? The gram search is slowing things down extremely. Crazy clients want to have a minimum word length of 1, which is kind of insane, but that's how it is.
Any idea?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-performance-tp2926836p2935175.html
Sent from the Solr - User mailing list archive at Nabble.com.
--- On Wed, 5/11/11, javaxmlsoapdev vika...@yahoo.com wrote:
From: javaxmlsoapdev vika...@yahoo.com
Subject: Solr performance
To: solr-user@lucene.apache.org
Date: Wednesday, May 11, 2011, 2:07 PM
I have some 25 odd fields with stored=true in schema.xml. Retrieving back 5,000 records takes a few secs. I also tried passing fl and only including one field in the response, but the response time is still the same. What are
Hello,
The problem turned out to be some sort of sharding/searching weirdness. We
modified some code in sharding but I don't think it is related. In any case,
we just added a new server that just shards (but doesn't do any searching /
doesn't contain any index) and performance is very very good.
Btw, I am monitoring output via JConsole with 8GB of RAM, and it still goes to 8GB every 20 seconds or so; GC runs, and it falls back down to 1GB.
Hmm, the JVM eating 8GB every 20 seconds sounds like a lot.
Do you return all results (ids) for your queries? Any tricky
faceting/sorting/function queries?
My solr+jetty+java6 install seems to work well with these GC options.
It's a dual processor environment:
-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
I've never had a real problem with memory, so I've not done any kind of
auditing. I probably should, but time is a limited resource.
CMS is very good for multicore CPUs. Use incremental mode only when you have a single CPU with only one or two cores.
On Tuesday 15 March 2011 16:03:38 Shawn Heisey wrote:
My solr+jetty+java6 install seems to work well with these GC options.
It's a dual processor environment:
The host is dual quad-core, each Xen VM has been given two CPUs. Not
counting dom0, two of the hosts have 10/8 CPUs allocated, two of them
have 8/8. The dom0 VM is also allocated two CPUs.
I'm not really sure how that works out when it comes to Java running on
the VM, but if at all
Hello everyone,
First of all here is our Solr setup:
- Solr nightly build 986158
- Running Solr inside the default Jetty that comes with the Solr build
- 1 write-only master, 4 read-only slaves (quad core 5640 with 24GB of RAM)
- Index replicated (on optimize) to slaves via Solr Replication
- Size of
Hi Doğacan,
Are you, at some point, running out of heap space? In my experience, that's the common cause of increased load and excessively high response times (or timeouts).
Cheers,
Hello,
2011/3/14 Markus Jelsma markus.jel...@openindex.io
Hi Doğacan,
Are you, at some point, running out of heap space? In my experience, that's the common cause of increased load and excessively high response times (or timeouts).
How much of a heap size would be enough? Our index size
I've definitely had cases in 1.4.1 where, even though I didn't have an OOM error, Solr was being weirdly slow, and increasing the JVM heap size fixed it. I can't explain why it happened, or exactly how you'd know this was going on; I didn't see anything odd in the logs to indicate it, I just
Hello again,
2011/3/14 Markus Jelsma markus.jel...@openindex.io
Nope, no OOM errors.
That's a good start!
Insanity count is 0 and fieldCache has 12 entries. We do use some boosting functions.
Btw, I am monitoring output via jconsole with 8gb of ram and it still goes
to 8gb every 20 seconds or so,
gc runs, falls down to 1gb.
Hmm, maybe the garbage
It's actually, as I understand it, expected JVM behavior to see the heap rise to close to its limit before it gets GC'd; that's how Java GC works. Whether that should happen every 20 seconds or what, I don't know.
Another option is setting better JVM garbage collection arguments, so GC
You might also want to add the following switches for your GC log.
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCTimeStamps
-XX:+PrintGCDetails -Xloggc:/var/log/tomcat6/gc.log
-XX:+PrintGCApplicationConcurrentTime
-XX:+PrintGCApplicationStoppedTime"
Also, what JVM version are you using and
That depends on your GC settings and generation sizes. And, instead of
UseParallelGC you'd better use UseParNewGC in combination with CMS.
See 22: http://java.sun.com/docs/hotspot/gc1.4.2/faq.html
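Pulling this thread's GC advice together, a hedged sketch of startup flags might look like the following (heap sizes and the log path are illustrative placeholders, not values from the original posts):

```shell
# CMS for the old generation plus ParNew (rather than ParallelGC) for
# the young generation, with GC logging so pause behavior can be
# inspected afterwards.
JAVA_OPTS="$JAVA_OPTS -Xms2g -Xmx2g \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails \
  -Xloggc:/var/log/tomcat6/gc.log"
export JAVA_OPTS
```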
Hello,
2011/3/14 Markus Jelsma markus.jel...@openindex.io
That depends on your GC settings and generation sizes. And, instead of
UseParallelGC you'd better use UseParNewGC in combination with CMS.
JConsole now shows a different profile output but load is still high and
performance is still
Mmm. SearchHandler.handleRequestBody takes care of sharding. Could your system suffer from http://wiki.apache.org/solr/DistributedSearch#Distributed_Deadlock ?
I'm not sure; I haven't seen a similar issue in a sharded environment, probably because it was a controlled environment.
Hello,
2011/3/14 Markus Jelsma markus.jel...@openindex.io
Mmm. SearchHander.handleRequestBody takes care of sharding. Could your
system
suffer from
http://wiki.apache.org/solr/DistributedSearch#Distributed_Deadlock
?
We increased thread limit (which was 1 before) but it did not help.
Anyway,
View this message in context:
http://lucene.472066.n3.nabble.com/Improving-Solr-performance-tp2210843p2254121.html
Sent from the Solr - User mailing list archive at Nabble.com.
:8983/solr/select/?q=my_query2
Please pay attention to the meaning of the -n parameter (there is a slight gotcha there). See man ab for details on usage, or see
http://www.derivante.com/2009/05/05/solr-performance-benchmarks-single-vs-multi-core-index-shards/
for example.
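A sketch of such a benchmark run (the URL is illustrative; only the query fragment above came from the thread):

```shell
# -n is the TOTAL number of requests across all concurrent connections
# (-c), not per connection -- the usual gotcha with ab. See man ab.
ab -n 1000 -c 10 'http://localhost:8983/solr/select/?q=my_query2'
```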
In the last post, I wrote
in advance (and also thanks for previous comments)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Improving-Solr-performance-tp2210843p2249108.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Thu, Jan 13, 2011 at 10:10 PM, supersoft elarab...@gmail.com wrote:
On the one hand, I found those comments about the reasons for sharding really interesting. The documentation agrees with you about why to split an index into several shards (big index sizes), but I don't find any explanation about
I see from your other messages that these indexes all live on the same
machine.
You're almost certainly I/O bound, because you don't have enough memory for
the
OS to cache your index files. With 100GB of total index size, you'll get best
results with between 64GB and 128GB of total RAM.
No, it also depends on the queries you execute (sorting is a big consumer) and
the number of concurrent users.
Is that a general rule of thumb? That it is best to have about the
same amount of RAM as the size of your index?
So, with a 5GB index, I should have between 4GB and 8GB of RAM
I see a lot of people using shards to hold different types of
documents, and it almost always seems to be a bad solution. Shards are
intended for distributing a large index over multiple hosts -- that's
it. Not for some kind of federated search over multiple schemas, not
for access control.
On Mon, 2011-01-10 at 21:43 +0100, Paul wrote:
I see from your other messages that these indexes all live on the same
machine.
You're almost certainly I/O bound, because you don't have enough memory for
the
OS to cache your index files. With 100GB of total index size, you'll get
Not sure if this was mentioned yet, but if you are doing slave/master
replication you'll need 2x the RAM at replication time. Just something to
keep in mind.
-mike
On Mon, Jan 10, 2011 at 5:01 PM, Toke Eskildsen t...@statsbiblioteket.dk wrote:
On Mon, 2011-01-10 at 21:43 +0100, Paul wrote:
I
On 1/10/2011 5:03 PM, Dennis Gearon wrote:
What I seem to see suggested here is to use different cores for the things you suggested:
- different types of documents
- Access Control Lists
I wonder how sharding would work in that scenario?
Sharding has nothing to do with that scenario at all.
And I don't think I've seen anyone suggest a separate core just for Access Control Lists. I'm not sure what that would get you.
Perhaps a separate store that isn't Solr at all, in some cases.
On 1/10/2011 5:36 PM, Jonathan Rochkind wrote:
Any sources to cite for this statement? And are you talking about RAM
allocated to the JVM or available for OS cache?
On 1/7/2011 2:57 AM, supersoft wrote:
I have deployed a 5-sharded infrastructure where: shard1 has 3124422 docs, shard2 has 920414 docs, shard3 has 602772 docs, shard4 has 2083492 docs, shard5 has 11915639 docs. Indexes total size: 100GB.
The OS is Linux x86_64 (Fedora release 8) with vMem equal to
Are you using the Solr caches? These are configured in solrconfig.xml in each core. Make sure you have at least 50-100 entries configured for each kind.
Also, use filter queries: a filter query describes a subset of documents. When you run a bunch of queries against the same filter query, the second and
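The filter-query advice above can be sketched like this (host and field names are placeholders):

```shell
# fq results are cached in Solr's filterCache separately from q, so any
# later query reusing the same fq skips recomputing that document set.
curl 'http://localhost:8983/solr/select?q=title:lucene&fq=type:book&rows=10'
# Second query with a different q but the same fq: the type:book doc
# set comes straight from the filterCache.
curl 'http://localhost:8983/solr/select?q=author:smith&fq=type:book&rows=10'
```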
me an approach of how I should tune the instance so that it is not so heavily dependent on the number of simultaneous queries?
Thanks in advance
--
View this message in context:
http://lucene.472066.n3.nabble.com/Improving-Solr-performance-tp2210843p2210843.html
Sent from the Solr - User mailing list
Some questions:
1 - Are all shards on the same machine?
2 - What is your RAM size?
3 - What are the sizes of the indexes on each shard, in GB?
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Improving-Solr-performance-tp2210843p2210878.html
Sent from the Solr - User mailing list
1 - Yes, all the shards are on the same machine
2 - The machine RAM is 7.8GB and I assign 3.4GB to the Solr server
3 - The shard sizes (GB) are 17, 5, 3, 11, 64
--
View this message in context:
http://lucene.472066.n3.nabble.com/Improving-Solr-performance-tp2210843p2211135.html
Sent from the Solr
for responses from all shards, incorporates all responses into a single result, and returns it.
So if any shard takes more time to respond, your total response time will be affected.
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Improving-Solr-performance
, Nb_indexed_fields_in_index,
...) ?
Regards,
---
Hong-Thai
-Original Message-
From: Grijesh.singh [mailto:pintu.grij...@gmail.com]
Sent: Friday, January 7, 2011 12:29
To: solr-user@lucene.apache.org
Subject: Re: Improving Solr performance
Shards are used when the index size becomes huge.
Open a new mail conversation for that.
-
Grijesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Improving-Solr-performance-tp2210843p2211300.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Fri, 2011-01-07 at 10:57 +0100, supersoft wrote:
[5 shards, 100GB, ~20M documents]
...
[Low performance for concurrent searches]
Using JConsole to monitor the server Java process, I checked that the heap memory and CPU usage don't reach the upper limits, so the server shouldn't
Making sure the index can fit in memory (you don't have to allocate that
much to Solr, just make sure it's available to the OS so it can cache it --
otherwise you are paging the hard drive, which is why you are probably IO
bound) has been the key to our performance. We recently opted to use less
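A quick way to sanity-check the "index fits in page cache" point above (the index path is a placeholder, not from the original post):

```shell
# Compare the on-disk index size with the memory the OS has available
# for its page cache; if the index is much larger, searches will keep
# going back to disk.
du -sh /var/solr/data/index   # total index size on disk
free -m                       # "cached" column ~ current page cache
```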
Last week we put our Solr in production. It was a very smooth start; Solr really works great and without any problems so far. It's a huge improvement over our old intranet search.
I wonder, however, whether we can increase the search performance of our Solr installation, just to make the search
queries
per second with the hardware mentioned above.
Thanks,
Regards,
--
- Siddhant
--
View this message in context:
http://old.nabble.com/Solr-Performance-Issues-tp27864278p27868456.html
Sent from the Solr - User mailing list archive
Hi everyone,
I have an index corresponding to ~2.5 million documents. The index size is
43GB. The configuration of the machine which is running Solr is - Dual
Processor Quad Core Xeon 5430 - 2.66GHz (Harpertown) - 2 x 12MB cache, 8GB
RAM, and 250 GB HDD.
I'm observing a strange trend in the
How many outstanding queries do you have at a time? Is it possible
that when you start, you have only a few queries executing concurrently
but as your test runs you have hundreds?
This really is a question of how your load test is structured. You might
get a better sense of how it works if your
Hi Erick,
The way the load test works is that it picks up 5000 queries, splits them
according to the number of threads (so if we have 10 threads, it schedules
10 threads - each one sending 500 queries). So it might be possible that the
number of queries at a point later in time is greater than
--
View this message in context:
http://old.nabble.com/Solr-Performance-Issues-tp27864278p27872139.html
Sent from the Solr - User mailing list archive at Nabble.com.
I was lucky to contribute an excellent solution:
http://issues.apache.org/jira/browse/LUCENE-2230
Even the 2nd edition of Lucene in Action advocates using fuzzy search only in exceptional cases.
Another solution would be 2-step indexing (it may work for many use cases), but it is not a spellchecker:
http://issues.apache.org/jira/browse/LUCENE-2230
Enjoy!
-Original Message-
From: Fuad Efendi [mailto:f...@efendi.ca]
Sent: January-19-10 11:32 PM
To: solr-user@lucene.apache.org
Subject: SOLR Performance Tuning: Fuzzy Searches, Distance, BK-Tree
Hi,
I am wondering: will SOLR or Lucene use caches for fuzzy searches? I mean per-term caching or something internal to Lucene, or maybe SOLR (SOLR may use its own query parser)...
Anyway, I implemented a BK-Tree and am playing with it right now; I altered the FuzzyTermEnum class of Lucene...
From: Peter Wolanin peter.wola...@acquia.com
To: solr-user@lucene.apache.org
Sent: Sun, January 3, 2010 3:37:01 PM
Subject: Re: SOLR Performance Tuning: Pagination
At the NOVA Apache Lucene/Solr Meetup last May, one of the speakers
from Near Infinity (Aaron McCurry I think) mentioned that he had a
patch for lucene
Si si, that issue.
Otis
--
Sematext -- http://sematext.com/ -- Solr - Lucene - Nutch
- Original Message
From: Peter Wolanin peter.wola...@acquia.com
To: solr-user@lucene.apache.org
Sent: Thu, January 7, 2010 9:27:04 PM
Subject: Re: SOLR Performance Tuning: Pagination
Great
At the NOVA Apache Lucene/Solr Meetup last May, one of the speakers
from Near Infinity (Aaron McCurry I think) mentioned that he had a
patch for lucene that enabled unlimited depth memory-efficient paging.
Is anyone in contact with him?
-Peter
On Thu, Dec 24, 2009 at 11:27 AM, Grant Ingersoll
On Dec 24, 2009, at 1:51 PM, Walter Underwood wrote:
Some bots will do that, too. Maybe badly written ones, but we saw that at
Netflix. It was causing search timeouts just before a peak traffic period, so
we set a page limit in the front end, something like 200 pages.
It makes sense for
I used pagination for a while till I found this...
I have a filtered query ID:[* TO *] returning 20 million results (no faceting), and pagination always seemed to be fast. However, it is fast only with low values of start, like start=12345. Queries like start=28838540 take 40-60 seconds, and even cause
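The contrast described above can be sketched like this (host and core are placeholders):

```shell
# To serve a page, Solr must internally collect and rank start+rows
# hits, so a huge start value costs far more than an early page even
# for the same query. --data-urlencode handles the spaces in [* TO *].
curl -G 'http://localhost:8983/solr/select' \
  --data-urlencode 'q=ID:[* TO *]' -d 'start=0' -d 'rows=20'
curl -G 'http://localhost:8983/solr/select' \
  --data-urlencode 'q=ID:[* TO *]' -d 'start=28838540' -d 'rows=20'
```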
When do users do a query like that? --wunder
FWIW, when implementing distributed search I ran into a similar problem, but then I noticed even Google doesn't let you go past page 1000; easier to just set a limit on start.
On Thu, Dec 24, 2009 at 8:36 AM, Walter Underwood wun...@wunderwood.org wrote:
When do users do a query like that?
On Dec 24, 2009, at 11:36 AM, Walter Underwood wrote:
When do users do a query like that? --wunder
Well, SolrEntityProcessor users do :)
http://issues.apache.org/jira/browse/SOLR-1499
(which by the way I plan on polishing and committing over the
holidays)
Erik
to the relevance before sorting.
[It also made me jump through hoops when I wrote some unit tests for the
indexing.]
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: December-24-09 1:51 PM
To: solr-user@lucene.apache.org
Subject: Re: SOLR Performance Tuning
huge number of documents
(better to tune the stop-word list)
-Fuad
Hi
Can you quickly explain what you did to disable INFO-Level?
I am from a PHP background and am not so well versed in Tomcat or
Java. Is this a section in solrconfig.xml or did you have to edit
Solr Java source and recompile?
Thanks In Advance
Andrew
2009/12/20 Fuad Efendi f...@efendi.ca:
Can you quickly explain what you did to disable INFO-Level?
I am from a PHP background and am not so well versed in Tomcat or
Java. Is this a section in solrconfig.xml or did you have to edit
Solr Java source and recompile?
1. Create a file called logging.properties with the following content
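The file content did not survive in this excerpt; as a sketch (these exact settings are my assumption, not necessarily what Fuad used), a minimal java.util.logging configuration that suppresses INFO could be:

```shell
# Assumption: a minimal java.util.logging config. Raising the root
# level to WARNING drops Solr's per-request INFO lines.
cat > logging.properties <<'EOF'
handlers = java.util.logging.ConsoleHandler
.level = WARNING
java.util.logging.ConsoleHandler.level = WARNING
EOF
```

Tomcat can then be pointed at the file with -Djava.util.logging.config.file=/path/to/logging.properties.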
After researching how to configure default SOLR Tomcat logging, I finally
disabled INFO-level for SOLR.
And performance improved at least 7 times!!! ('at least 7' because I
restarted server 5 minutes ago; caches are not prepopulated yet)
Before that, I had 300-600 ms in HTTPD log files in
On Mon, Apr 27, 2009 at 10:27 PM, Jon Bodner jbod...@blackboard.com wrote:
Trying to point multiple Solrs on multiple boxes at a single shared
directory is almost certainly doomed to failure; the read-only Solrs won't
know when the read/write Solr instance has updated the index.
I'm
: Tuesday, April 28, 2009 4:57:54 AM
Subject: Re: Solr Performance bottleneck
On Tue, Apr 28, 2009 at 3:18 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Hi,
You should probably just look at the index version number to figure out if
the name changed. If you are looking at segments.gen, you are looking at a
file that may not exist in Lucene in the future.
--
View this message in context:
http://www.nabble.com/Solr-Performance-bottleneck-tp23209595p23262198.html
Sent from the Solr - User mailing list archive at Nabble.com.
This isn't a new problem, NFS was 100X slower than local disk for me
with Solr 1.1.
Backing up indexes is very tricky. You need to do it while they are not being updated, or you'll get a corrupt copy. If your indexes aren't large, you are probably better off backing up the source documents and
assigned to each Solr instance.
Has anyone else seen a problem like this before? Can anyone suggest any
solutions? Will Solr 1.4 help (and is Solr 1.4 ready for production use)?
Any answers would be greatly appreciated.
Thanks,
Jon
--
View this message in context:
http://www.nabble.com/Solr