On 7/31/2013 12:24 AM, William Bell wrote:
But that link does not tell me which one you are using?
You are listing something like 4 versions on your site.
Also, what did it fix? Pause times?
Any other words of wisdom ?
I'm not sure whether that was directed at me or Roman, but here are my
answers:
Hi Dmitry,
probably a mistake in the README; try calling it with -q
/home/dmitry/projects/lab/solrjmeter/queries/demo/demo.queries
as for the base_url, I was testing it on Solr 4.0, where it tries contacting
/solr/admin/system - is it different for 4.3? I guess I should make it
configurable (it
I'll try to run it with the new parameters and let you know how it goes.
I've rechecked details for the G1 (default) garbage collector run and I can
confirm that 2 out of 3 runs were showing high max response times, in some
cases even 10secs, but the customized G1 never - so definitely the
is obviously higher with G1GC.
-Original message-
From:Roman Chyla roman.ch...@gmail.com
Sent: Wednesday 31st July 2013 18:32
To: solr-user@lucene.apache.org
Subject: Re: Measuring SOLR performance
Hello,
I have been wanting some tools for measuring performance of SOLR, similar
to Mike McCandless' Lucene benchmark.
So yet another monitor was born; it is described here:
http://29min.wordpress.com/2013/07/31/measuring-solr-query-performance/
I tested it on the problem of garbage collectors (see
On 7/30/2013 6:59 PM, Roman Chyla wrote:
Hi,
We updated to version 4.3.0 from 4.2.1 and we have a performance
problem with sorting.
A query that returns 1 hits has a query time of more than 100 ms (it can be
more than 1 s), against less than 10 ms for the same query without the
sort parameter:
query with sorting option:
Ariel,
I just ran up against a similar issue when upgrading from 3.6.1 to 4.3.0.
In my case, my solrconfig.xml for 4.3.0 (which was based on my 3.6.1 file)
did not provide a newSearcher or firstSearcher warming query. After adding
a query to each listener, my query speeds drastically
We have a solr core with about 115 million documents. We are trying to
migrate data and running a simple query with *:* query and with start
and rows param.
The performance is becoming too slow in Solr; it's taking almost 2 minutes
to get 4000 rows, and the migration is just too slow. Logs snippet
Hi,
How many shards do you have? This is a known issue with deep paging with multi
shard, see https://issues.apache.org/jira/browse/SOLR-1726
You may be more successful in going to each shard, one at a time (with
distrib=false) to avoid this issue.
--
Jan Høydahl, search solution architect
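Jan's shard-at-a-time suggestion can be sketched as plain URL construction; the base URL and the core name `core0` are hypothetical, and only `distrib=false` plus the paging parameters come from the thread:

```python
from urllib.parse import urlencode

def shard_export_url(base, core, start, rows):
    """Build a non-distributed *:* query URL for one shard core.

    distrib=false keeps the request on this core alone, so a multi-shard
    collection can be walked one shard at a time.
    """
    params = {
        "q": "*:*",
        "distrib": "false",   # don't fan the request out to other shards
        "start": start,
        "rows": rows,
        "wt": "json",
    }
    return "%s/%s/select?%s" % (base.rstrip("/"), core, urlencode(params))

url = shard_export_url("http://localhost:8983/solr", "core0", 0, 4000)
print(url)
```

You would then fetch this URL per core, advancing `start` within each core independently.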
Jan,
Would the same distrib=false help for distributed faceting? We are running
into a similar issue with facet paging.
Dmitry
On Mon, Apr 29, 2013 at 11:58 AM, Jan Høydahl jan@cominvent.com wrote:
Hi,
How many shards do you have? This is a known issue with deep paging with
multi
We have a single shard, and all the data is in a single box only.
Definitely looks like deep-paging is having problems.
Just to understand: is the searcher looping over the result set
every time and skipping the first start rows? This will definitely
take a toll when we reach a higher start
Abhishek,
There is a wiki page regarding this:
http://wiki.apache.org/solr/CommonQueryParameters
Search for pageDoc and pageScore.
On Mon, Apr 29, 2013 at 1:17 PM, Abhishek Sanoujam
abhi.sanou...@gmail.comwrote:
We have a single shard, and all the data is in a single box only.
Definitely looks like
We've found that you can do a lot for yourself by using a filter query
to page through your data if it has a natural range to do so instead
of start and rows.
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
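The filter-query paging idea above can be sketched as follows; the numeric `id` field, its bounds, and the step size are assumptions for illustration. Each page is fetched with start=0 plus one disjoint fq clause, so Solr never has to scan past all the earlier rows:

```python
def range_filters(lo, hi, step):
    """Yield disjoint fq range clauses covering [lo, hi] on a numeric field,
    replacing ever-larger start offsets with constant-cost filters."""
    start = lo
    while start <= hi:
        end = min(start + step - 1, hi)
        yield "id:[%d TO %d]" % (start, end)
        start = end + 1

pages = list(range_filters(0, 9999, 4000))
# ['id:[0 TO 3999]', 'id:[4000 TO 7999]', 'id:[8000 TO 9999]']
```

Each clause goes into an fq parameter of its own request, always with start=0.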
I guess so, you'd have to use a filter query to page through the set
of documents you were faceting against and sum them all at the end.
It's not quite the same operation as paging through results, because
facets are aggregate statistics, but if you're willing to go through
the trouble, I bet it
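The sum-at-the-end idea can be sketched like this, assuming each page's facet response has already been parsed into a plain value-to-count dict (the sample data is made up):

```python
from collections import Counter

def merge_facet_counts(pages):
    """Sum per-page facet counts into one aggregate; counts from separate
    filtered requests are additive when the filter ranges are disjoint."""
    total = Counter()
    for counts in pages:
        total.update(counts)
    return total

merged = merge_facet_counts([
    {"red": 10, "blue": 3},  # parsed facet counts from fq page 1
    {"red": 4, "green": 2},  # parsed facet counts from fq page 2
])
# merged["red"] == 14
```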
Thanks.
Only question is how to smoothly transition to this model. Our facet
(string) fields contain timestamp prefixes, that are reverse ordered
starting from the freshest value. In theory, we could try computing the
filter queries for those. But before doing so, we would need the matched
ids
Should I shift the machine to m1.large for 250 docs, or for 500?
Or will it work as-is for now?
--
View this message in context:
http://lucene.472066.n3.nabble.com/SOLR-Performance-question-tp4041245.html
Sent from the Solr - User mailing list archive at Nabble.com.
...@gmail.com]
Sent: Tuesday, February 19, 2013 1:46 PM
To: solr-user@lucene.apache.org
Subject: SOLR Performance question
Hi everybody.
I stored 42 fields in Solr
and indexed 34 fields.
I am going to store 4-6 more columns and index 3-5 of them.
The total docs I have stored so far --- 250,
and may
Hi users,
Could you please help us tune Solr search performance? We have tried some
performance testing on a Solr instance with 8 GB RAM and 50,000 records in the
index, and we got 33 concurrent users hitting the instance at an average of
17.5 hits per second with a response time of 2 seconds, which is very high
Hello,
What's your OS/CPU? Is it a VM or real hardware? Which JVM do you run? With
which parameters? Have you checked the GC log? What's the index size? What are
typical query parameters? What's the average number of results per query?
Have you tried running a query with debugQuery=true during hard
On Mon, Oct 29, 2012 at 7:04 AM, Shawn Heisey s...@elyograg.org wrote:
They are indeed Java options. The first two control the maximum and
starting heap sizes. NewRatio controls the relative size of the young and
old generations, making the young generation considerably larger than it is
by
On Fri, Oct 26, 2012 at 11:04 PM, Shawn Heisey s...@elyograg.org wrote:
Warming doesn't seem to be a problem here -- all your warm times are zero,
so I am going to take a guess that it may be a heap/GC issue. I would
recommend starting with the following additional arguments to your JVM.
On 10/28/2012 2:28 PM, Dotan Cohen wrote:
On Wed, Oct 24, 2012 at 4:33 PM, Walter Underwood wun...@wunderwood.org wrote:
Please consider never running optimize. That should be called force merge.
Thanks. I have been letting the system run for about two days already
without an optimize. I will let it run a week, then merge to see the
I spoke too soon! Whereas three days ago, when the index was new, 500
records could be written to it in 3 seconds, now that operation is
taking a minute and a half, sometimes longer. I ran optimize() but
that did not help the writes. What can I do to improve the write
performance?
Even opening the
On 10/26/2012 7:16 AM, Dotan Cohen wrote:
On Fri, Oct 26, 2012 at 4:02 PM, Shawn Heisey s...@elyograg.org wrote:
Taking all the information I've seen so far, my bet is on either cache
warming or heap/GC trouble as the source of your problem. It's now specific
information gathering time. Can you gather all the following information
On 10/26/2012 9:41 AM, Dotan Cohen wrote:
On the dashboard of the GUI, it lists all the jvm arguments. Include those.
Click Java Properties and gather the java.runtime.version and
java.specification.vendor information.
After one of the long update times, pause/stop your indexing application.
On Tue, Oct 23, 2012 at 3:07 PM, Erick Erickson erickerick...@gmail.com wrote:
Maybe you've been looking at it but one thing that I didn't see on a fast
scan was that maybe the commit bit is the problem. When you commit,
eventually the segments will be merged and a new searcher will be opened
Please consider never running optimize. That should be called force merge.
wunder
On Oct 24, 2012, at 3:28 AM, Dotan Cohen wrote:
On Tue, Oct 23, 2012 at 3:07 PM, Erick Erickson erickerick...@gmail.com
wrote:
Maybe you've been looking at it but one thing that I didn't see on a fast
scan was that maybe the commit bit is the problem. When you commit,
eventually the segments will be merged and a new searcher will be opened
(this is true even if you're NOT optimizing). So you're effectively committing
When Solr is slow, I'm seeing these in the logs:
[collection1] Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later.
[collection1] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
Googling, I found this in the FAQ:
Typically the way to avoid this error is to
Hello!
You can check if the long warming is causing the overlapping
searchers. Check Solr admin panel and look at cache statistics, there
should be warmupTime property.
Lowering the autowarmCount should lower the time needed to warm up;
however, you can also look at your warming queries (if you
Are you using Solr 3X? The occasional long commit should no longer
show up in Solr 4.
- Mark
On Mon, Oct 22, 2012 at 10:44 AM, Dotan Cohen dotanco...@gmail.com wrote:
I've got a script writing ~50 documents to Solr at a time, then
committing. Each of these documents is no longer than 1 KiB of
On Mon, Oct 22, 2012 at 5:02 PM, Rafał Kuć r@solr.pl wrote:
Thank you, I have gone over the Solr admin panel twice and
On Mon, Oct 22, 2012 at 5:27 PM, Mark Miller markrmil...@gmail.com wrote:
Thank you Mark. In fact, this is the production release of Solr 4.
--
Dotan Cohen
http://gibberish.co.il
http://what-is-what.com
On 10/22/2012 9:58 AM, Dotan Cohen wrote:
Thank you, I have gone over the Solr admin panel twice and I cannot
find the cache statistics. Where are they?
If you are running Solr4, you can see individual cache autowarming times
here, assuming your core is named collection1:
Perhaps you can grab a snapshot of the stack traces when the 60 second
delay is occurring?
You can get the stack traces right in the admin ui, or you can use
another tool (jconsole, visualvm, jstack cmd line, etc)
- Mark
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the cause of your problem.
Second, the OS will use the extra memory for file buffers, which really helps
performance, so you might
Has the Solr team considered renaming the optimize function to avoid
leading people down the path of this antipattern?
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a
Lucene already did that:
https://issues.apache.org/jira/browse/LUCENE-3454
Here is the Solr issue:
https://issues.apache.org/jira/browse/SOLR-3141
People over-use this regardless of the name. In Ultraseek Server, it was called
force merge and we had to tell people to stop doing that nearly
On Mon, Oct 22, 2012 at 4:39 PM, Michael Della Bitta
michael.della.bi...@appinions.com wrote:
Has the Solr team considered renaming the optimize function to avoid
leading people down the path of this antipattern?
If it were never the right thing to do, it could simply be removed.
The problem is
On Mon, Oct 22, 2012 at 10:01 PM, Walter Underwood
wun...@wunderwood.org wrote:
Thanks. Looking at
On Tue, Oct 23, 2012 at 3:52 AM, Shawn Heisey s...@elyograg.org wrote:
As soon as you make any change at all to an index, it's no longer
optimized. Delete one document, add one document, anything. Most of the
time you will not see a performance increase from optimizing an index that
consists
Hi,
I have an index of about 50m documents. the fields in this index are
basically hierarchical tokens: token1, token2 ... token10
When searching the index, I start by getting a list of the query tokens
(1..10) and then requesting the documents that suit those query tokens.
I always want about
Jack, it's not from Chris.
--Surendra
Hi Otis,
done :) So far we use Graphite, Ganglia and Zabbix, plus JStatsD for our
JVM monitoring.
Best regards
Vadim
2012/5/31 Otis Gospodnetic otis_gospodne...@yahoo.com:
Hi,
Super quick poll: What do you use for Solr performance monitoring?
Vote here:
http://blog.sematext.com/2012/05/30
Hi,
Super quick poll: What do you use for Solr performance monitoring?
Vote here:
http://blog.sematext.com/2012/05/30/poll-what-do-you-use-for-solr-performance-monitoring/
I'm collecting data for my Berlin Buzzwords talk that will touch on Solr, so
your votes will be greatly appreciated
Jack Krupansky jack at basetechnology.com writes:
I vaguely recall some thread blocking issue with trying to parse too many
PDF files at one time in the same JVM.
Occasionally Tika (actually PDFBox) has been known to hang for some PDF
docs.
Do you have enough memory in the JVM? When
Very strange.
-- Jack Krupansky
-Original Message-
From: chris.a.mattm...@jpl.nasa.gov
Sent: Friday, May 25, 2012 7:08 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr Performance
Jack Krupansky jack
in the JVM? Maybe garbage collection is taking too much of
the CPU.
-- Jack Krupansky
-Original Message-
From: chris.a.mattm...@jpl.nasa.gov
Sent: Thursday, May 24, 2012 9:55 AM
To: solr-user@lucene.apache.org
Subject: Solr Performance
Hi Chris
First of all, thanks a lot for your earlier inputs
Afternoon,
We are testing an updated version of our Solr server running solr 3.5.0 and we
are experiencing some performance issues with regard to updates and commits.
Searches are working well.
There are approximately 80,000 documents and the index is about 2.5 GB. This
does not seem to be
On 5/9/2012 7:01 AM, richard.pog...@holidaylettings.co.uk wrote:
Another option is to remove autowarming, and instead create a small
bunch of queries that go most of the way. If you sort on a field, do
that sort; facet queries also. This will load the basic Lucene data
structures. Also, just getting the index data loaded into the OS disk
cache helps a lot.
On
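A minimal sketch of that warming idea (the sort field `price`, the facet field `category`, and the endpoint are all hypothetical): issue a few static queries after startup that exercise the same sorts and facets real traffic will use, so the Lucene field caches and the OS disk cache get populated:

```python
from urllib.parse import urlencode

# Static warm-up requests that touch the structures real traffic needs:
# a sorted query populates the sort field cache, and a faceted query
# loads the facet field's term data.
WARMING_PARAMS = [
    {"q": "*:*", "sort": "price asc", "rows": 0},
    {"q": "*:*", "facet": "true", "facet.field": "category", "rows": 0},
]

def warming_urls(base):
    """Turn the static warming parameter sets into request URLs."""
    return [base + "?" + urlencode(p) for p in WARMING_PARAMS]

for url in warming_urls("http://localhost:8983/solr/select"):
    print(url)  # fetch each of these once after a new searcher opens
```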
lucenerevolution.com - Lucene/Solr Open Source Search Conference.
Boston May 7-10
--
If you reply to this email, your message will be added to the discussion
below:
http://lucene.472066.n3.nabble.com/Solr-Performance-Improvement-and-degradation-Help-tp3767015p318.html
On Sun, Feb 26, 2012 at 3:32 PM, Erick Erickson erickerick...@gmail.com wrote:
Would you hypothesize that lazy field loading could be that much
slower if a large fraction of fields were selected?
If you actually use the lazy field later, it will cause an extra read
for each field.
If you don't
with
Solr, does that apply to wildcards used in the fl list?
Thanks!
On Fri, Feb 24, 2012 at 10:25 AM, naptowndev naptowndev...@gmail.com wrote:
Our current config for that is as follows:
documentCache class=*solr.LRUCache* size=*15000*
initialSize=*15000*autowarmCount
=*0* /
It's the same for both instances
I assume the asterisks are for emphasis and are
On Fri, Feb 24, 2012 at 11:24 AM, naptowndev naptowndev...@gmail.com wrote:
Another question I have is regarding solr.LRUCache vs. solr.FastLRUCache.
Would there be reason to implement (or not implement) fastLRU on the
documentcache?
LRUCache can be faster if the hit rate is really low (i.e.
were comparing
against - so I need to do that too)
Please also let me know if you have any further suggestions.
Thanks!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Performance-Improvement-and-degradation-Help-tp3767015p3773310.html
Sent from the Solr - User mailing
- but that should
give you an idea of how we are using wildcards.
I'm not sure about the maxBooleanClauses...not being all that familiar with
Solr, does that apply to wildcards used in the fl list?
Thanks!
boost from fast vector
highlighting, but also the decreased payload size.
Thanks in advance!
).
Anybody have any insight into why the newer versions are performing a bit
slower?
, and how large is the term index for your
searches? How many documents do you get with each search? And, do you
use filter queries- these are very powerful at limiting searches.
2012/2/7 James ljatreey...@163.com:
Is there any practice to load index into RAM to accelerate solr
performance
2012/2/7 James ljatreey...@163.com:
Is there any practice of loading the index into RAM to accelerate Solr
performance?
The overall document count is about 100 million. The search time is around
100 ms. I am seeking some method to accelerate the response time for Solr.
Just check that there is some
I see that one common practice is to use an SSD disk. But SSDs also cost
a lot; I just want to know whether there is some method
On 08/02/2012 09:17, Ted Dunning wrote:
This is true with Lucene as it stands. It would be much faster if there
were a specialized in-memory index such as is typically used with high
performance search engines.
This could be implemented in Lucene trunk as a Codec. The challenge
though is to
Add this as well:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.155.5030
On 11/22/2011 11:52 PM, Husain, Yavar wrote:
Hi Shawn
That was so great of you to explain the architecture in such detail. I
enjoyed reading it multiple times.
I have a question here:
You mentioned that we can use crc32(DocumentId) % NumServers. Now actually I am
using that in my
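The crc32(DocumentId) % NumServers scheme can be sketched in Python (the server count of 4 is only an example):

```python
import zlib

def shard_for(doc_id, num_servers):
    """Choose a shard for a document: crc32 of the id, modulo the server
    count. The same id always maps to the same shard, and well-distributed
    ids spread roughly evenly across shards."""
    return zlib.crc32(doc_id.encode("utf-8")) % num_servers

NUM_SERVERS = 4
shards = {d: shard_for(d, NUM_SERVERS) for d in ("doc-1", "doc-2", "doc-3")}
# every value is a shard index in range(NUM_SERVERS)
```

The modulus ties the mapping to the server count, so adding a server reassigns most documents; that trade-off is worth noting before relying on this scheme.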
.
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Monday, November 21, 2011 7:47 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Performance/Architecture
On 11/21/2011 12:41 AM, Husain, Yavar wrote:
Number of rows in SQL Table (Indexed till now using Solr): 1 million
Total Size of Data in the table: 4GB
Total Index Size: 3.5 GB
Total Number of Rows that I have to index: 20 Million (approximately 100 GB
Data) and growing
What are the best practices with respect to distributing the index?
Hi
I have one instance of solr running on JBoss with the following schema and
partial config:
Schema:
<schema name="users_szukacz" version="1.4">
  <types>
    <fieldType name="string" class="solr.StrField" sortMissingLast="true"
      omitNorms="true"/>
    <fieldType name="int" class="solr.TrieIntField" omitNorms="true"
and 4 processors Intel Xeon 2.5GHz.
: Index has 41 000 000 documents and 9 GB size. For query like:
: 1)
:
q=Jarecki+Jan&fq=sex:M&fq=confirmed:1&fq=show_search:3&fl=user_id&start=0&rows=10&wt=json&version=2.2
:
: server reaches an average of *90 query/s* on 4 threads, which is very low for me.
:
: For query with filer on filed city:
: 2) ex.
Has anyone tried this? I cannot start Solr-Tomcat with the following options on
Ubuntu:
JAVA_OPTS=$JAVA_OPTS -Xms2048m -Xmx2048m -Xmn256m -XX:MaxPermSize=256m
JAVA_OPTS=$JAVA_OPTS -Dsolr.solr.home=/data/solr -Dfile.encoding=UTF8
-Duser.timezone=GMT
Don't use this option, these optimizations are buggy:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7070134
On Wed, Jul 27, 2011 at 3:56 PM, Fuad Efendi f...@efendi.ca wrote:
Thanks Robert!!!
Submitted On 26-JUL-2011 - yesterday.
This option was popular in Hbase
On 11-07-27 3:58 PM, Robert Muir rcm...@gmail.com wrote:
On Wed, Jul 27, 2011 at 3:56
On Wed, Jul 27, 2011 at 4:12 PM, Fuad Efendi f...@efendi.ca wrote:
Then you should tell them also, not to use it, if they want their loops to work.
--
lucidimagination.com
Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, June 03, 2011 9:41 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr performance tuning - disk i/o?
This doesn't seem right. Here's a couple of things to try:
1 attach debugQuery=on to your long-running queries. The QTime
returned