Hi,
I'm using Solr 5.2.1, and I've indexed about 1GB of data into Solr.
However, I find that clustering is exceedingly slow after I index this 1GB of
data. It took almost 30 seconds to return the cluster results when I set it
to cluster the top 1000 records, and it still takes more than 3 seconds when
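For reference, clustering time tends to scale with the number of rows handed to the
clustering engine and the size of the fields it builds labels from, so one way to
narrow this down is to vary rows, point the snippet field at something short, and ask
Solr where the time goes. A minimal sketch, assuming the request handler has the
clustering component enabled (the collection name and field names below are
placeholders, not from the original post):

curl 'http://localhost:8983/solr/collection1/select?q=*:*&rows=100&clustering=true&clustering.results=true&carrot.title=title&carrot.snippet=summary&debug=timing&wt=json'

The debug=timing section of the response breaks the elapsed time down per search
component, which shows whether the clustering step or the underlying query dominates.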
On 4/17/2015 8:14 PM, Kamal Kishore Aggarwal wrote:
Hi,
As per this article, the Linux machine should preferably have RAM equal to 1.5 times
the index size. So, to verify this, I tried testing Solr performance with different
amounts of RAM allocated, keeping the other configuration (i.e. Solid State Drives,
8-core processor, 64-bit) the same in both cases.
On 2/27/2015 12:51 PM, Tang, Rebecca wrote:
> Thank you guys for all the suggestions and help! I've identified the main
> culprit with debug=timing. It was the mlt component. After I removed it,
> the speed of the query went back to reasonable. Another culprit is the
> expand component, but I ca
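For anyone chasing a similar slowdown: the per-component numbers that identified the
mlt component come from the timing section of the debug output. A minimal sketch of
such a request (the collection name and query are placeholders):

curl 'http://localhost:8983/solr/mycollection/select?q=cancer&rows=10&debug=timing&wt=json'

The response then contains a timing section under debug, with prepare and process
times listed per component (query, facet, mlt, highlight, expand, ...), so the
expensive one stands out.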
>> https://wiki.apache.org/solr/SolrPerformanceProblems
>>
>> perhaps going back to a very vanilla/default Solr configuration and
>> building back up from that baseline to better isolate what specific
>> setting might be impacting your environment
____
From: Tang, Rebecca
Sent: Wednesday, February 25, 2015 11:44
To: solr-user@lucene.apache.org
Subject: RE: how to debug solr performance degradation

Sorry, I should have been more specific.

I was referring to the Solr admin UI page. Today we started up an AWS
instance with 240 G of memory to see if we could fit all of our index (183G) in
memory and have enough for the JVM, cou
_
From: Shawn Heisey [apa...@elyograg.org]
Sent: Tuesday, February 24, 2015 5:23 PM
To: solr-user@lucene.apache.org
Subject: Re: how to debug solr performance degradation
On 2/24/2015 5:45 PM, Tang, Rebecca wrote:
> We gave the machine 180G mem to see if it improves performance.
meant to type "JMX or sflow agent"
also should have mentioned you want to be running a very recent JDK
From: Boogie Shafer
Sent: Tuesday, February 24, 2015 18:03
To: solr-user@lucene.apache.org
Subject: Re: how to debug solr performance d
Rebecca
You don’t want to give all the memory to the JVM. You want to give it just
enough for it to work optimally and leave the rest of the memory for the OS to
use for caching data. Giving the JVM too much memory can result in worse
performance because of GC. There is no magic formula to figu
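As a concrete starting point only (a sketch, not a sizing recommendation; the number
is a placeholder), the heap cap is set wherever Solr is started and everything else is
left to the OS page cache. With the bundled bin/solr scripts that is solr.in.sh; with a
servlet container it is the container's JVM options:

SOLR_HEAP="8g"
# roughly equivalent to passing: -Xms8g -Xmx8g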
Be careful what you think is being used by Solr since Lucene uses
MMapDirectories under the covers, and this means you might be seeing
virtual memory. See Uwe's excellent blog here:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
Best,
Erick
On Tue, Feb 24, 2015 at 5:02 PM
The other memory is used by the OS as file buffers. All the important parts of
the on-disk search index are buffered in memory. When the Solr process wants a
block, it is already right there, no delays for disk access.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
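A quick way to see that split in practice is to compare the Solr JVM's resident size
with its MMap-inflated virtual size, and to look at how much of the remaining RAM the
kernel is using as page cache. A sketch with standard Linux tools (nothing
Solr-specific):

# resident (RSS) vs virtual (VSZ) size of the Java process
ps -o pid,rss,vsz,cmd -C java
# the buffers/cached figures show memory the OS is using to cache index files
free -m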
We gave the machine 180G mem to see if it improves performance. However,
after we increased the memory, Solr started using only 5% of the physical
memory. It has always used 90-something%.
What could be causing Solr to not grab all the physical memory (it's grabbing
so little of it)?
On 2/24/2015 1:09 PM, Tang, Rebecca wrote:
> Our solr index used to perform OK on our beta production box (anywhere
> between 0-3 seconds to complete any query), but today I noticed that the
> performance is very bad (queries take between 12 – 15 seconds).
>
> I haven't updated the solr index con
example).
> I want to pinpoint where the performance issue is coming from. Could I have
> some suggestions/help on how to benchmark/debug solr performance issues.
Rough checking of IOWait and CPU load is a fine starting point. If it is CPU
load then you can turn on debug in Solr admin, wh
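For the rough check, the standard procps/sysstat tools are enough; a sketch:

top          # watch %wa (I/O wait) vs %us/%sy (CPU) while queries run
iostat -x 5  # per-device utilisation and average wait
vmstat 5     # swap activity and run queue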
tions/help on how to benchmark/debug solr performance issues.
Thank you,
Rebecca Tang
Applications Developer, UCSF CKM
Industry Documents Digital Libraries
E: rebecca.t...@ucsf.edu
On 12/29/2014 12:07 PM, Mahmoud Almokadem wrote:
> What do you mean with "important parts of index"? and how to calculate their
> size?
I have no formal education in what's important when it comes to doing a
query, but I can make some educated guesses.
Starting with this as a reference:
http://
Thanks Shawn.
What do you mean with "important parts of index"? and how to calculate their
size?
Thanks,
Mahmoud
Sent from my iPhone
> On Dec 29, 2014, at 8:19 PM, Shawn Heisey wrote:
>
>> On 12/29/2014 2:36 AM, Mahmoud Almokadem wrote:
>> I've the same index with a bit different schema and
Thanks all.
I've the same index with a bit different schema and 200M documents,
installed on 3 r3.xlarge (30GB RAM, and 600 General Purpose SSD). The size
of index is about 1.5TB, have many updates every 5 minutes, complex queries
and faceting with response time of 100ms that is acceptable for us.
Likely lots of disk + network IO, yes. Put SPM for Solr on your nodes to double
check.
Otis
Dears,
We've installed a cluster with one collection of 350M documents on 3
r3.2xlarge (60GB RAM) Amazon servers. The size of the index on each shard is
about 1.1TB, and the maximum volume size on Amazon is 1TB, so we added 2 General
Purpose SSD EBS volumes (1x1TB + 1x500GB) on each instance. Then we created a logical
volume
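The logical-volume step they describe would look roughly like this (a sketch; the
device names and mount point are placeholders for whatever the EBS volumes appear as
on the instance):

pvcreate /dev/xvdf /dev/xvdg             # the 1TB and 500GB EBS volumes
vgcreate solr_vg /dev/xvdf /dev/xvdg
lvcreate -l 100%FREE -n solr_lv solr_vg  # one ~1.5TB logical volume
mkfs.ext4 /dev/solr_vg/solr_lv
mount /dev/solr_vg/solr_lv /var/solr/data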
condition looks very normal in hindsight
** spot your long-running queries, optimise them, re-run your tests
** check your cache warming and how fast you start your load injector threads

Cheers,

Siegfried Goeschl

On 13 Jul 2014, at 09:53, rashi gandhi wrote:
Hi,
I am using SolrMeter for load/stress testing solr performance.
Tomcat is configured with default "maxThreads" (i.e. 200).
I set Intended Request per min in SolrMeter to 1500 and performed testing.
I found that sometimes it works with this much load on solr but sometimes
it g
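Since the default maxThreads is 200 and the injector pushes 1500 requests/min with
unknown per-request latency, it can be worth checking how close the connector's thread
pool is to exhaustion while the test runs. A sketch (the process match and thread-name
prefix assume a stock Tomcat 7 BIO HTTP connector on port 8080):

jstack $(pgrep -f catalina) | grep -c 'http-bio-8080-exec'

maxThreads itself is an attribute on the Connector element in Tomcat's conf/server.xml.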
I'm pretty much lost, please add some details:
1> 27-50 rpm. Queries? Updates?
2> what kinds of updates are happening if <1> is queries?
3> The various mail systems often strip screenshots, I don't see it.
4> What are you measuring anyway? QTime? Time for response to
come back?
5> are your log
I run a small solr cloud cluster (4.5) of 3 nodes, 3 collections with 3
shards each. Total index size per node is about 20GB with about 70M
documents.
In regular traffic (27-50 rpm) the performance is ok and response time
ranges from 100 to 500ms.
But when I start loading (overwriting) 70M documen
To be of any help we'd need to know what your documents look like, what
your queries look like, and what the specifications of your server are. How much
heap is dedicated to Solr? How much free memory is available for the OS
file cache? You have to figure out the bottleneck. Is it CPU or RAM or
Disk? Ma
Hi,
I am using SolrMeter for performance benchmarking. I am able to
successfully test my solr setup up to 1000 queries per min while
searching.
But when I am exceeding this limit say 1500 search queries per min,
facing "Server Refused Connection" in SOLR.
Currently, I have only one solr server run
I think a multiValued field just copies the multiple values: the index is
bigger but querying is easier. Performance may be worse, but it depends on
how you use it.
I wonder about performance difference of 2 indexing options: 1- multivalued
field 2- separate fields
The case is as follows: Each document has 100 “properties”: prop1..prop100.
The values are strings and there is no relation between different
properties. I would like to search by exact match on se
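To make the two options concrete, here is how the same document could be indexed
either way (a sketch; the collection name and the *_s / *_ss dynamic-field suffixes
are assumptions based on the stock example schema):

# option 1: one multiValued field holding all properties
curl 'http://localhost:8983/solr/collection1/update?commit=true' -H 'Content-Type: application/json' -d '[{"id":"doc1","props_ss":["prop1=red","prop17=large"]}]'

# option 2: a separate field per property
curl 'http://localhost:8983/solr/collection1/update?commit=true' -H 'Content-Type: application/json' -d '[{"id":"doc2","prop1_s":"red","prop17_s":"large"}]'

Exact-match filtering then becomes fq=props_ss:"prop1=red" in the first case and
fq=prop1_s:red in the second.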
Hello,
I have 1000 GB of data that I want to index.
Assuming I have enough space for storing the indexes on a single machine,
I would like to get an idea of Solr performance when searching for an item
in such a huge data set.
Do I need to use shards to improve Solr search efficiency, or it
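If sharding does turn out to be necessary, with SolrCloud the shard count is fixed
when the collection is created via the Collections API; a sketch (the names and counts
are placeholders, not a recommendation for this data set):

curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=bigindex&numShards=4&replicationFactor=2&collection.configName=myconf'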
ideas?
Please help, how should I tackle this?
Thanks,
Mayur
Thanks Furkan. Looking forward to seeing your test results.
Sent from Yahoo Mail on Android
Hi Hien;
Actually, a high indexing rate is a relative concept. I could index this kind of
data within a few hours. I aim to index much, much more data in the same amount of
time soon. I can share my test results when I do.
Thanks;
Furkan KAMACI
On Friday, December 6, 2013, Hien Luu wrote:
On 12/5/2013 4:08 PM, Hien Luu wrote:
Just curious what was the index rate that you were able to achieve?
What I've usually seen based on my experience and what people have said
here and on IRC is that the data source is usually the bottleneck - Solr
typically indexes VERY fast, as long as yo
Hi Furkan,
Just curious what was the index rate that you were able to achieve?
Regards,
Hien
On Thursday, December 5, 2013 3:06 PM, Furkan KAMACI
wrote:
Hi;
Erick and Shawn have explained that we need more information about your
infrastructure. I should add that I had nearly as much test data in my SolrCloud
as you have, and I did not have any problems except when indexing at a very high
rate, and that can be solved with tuning. You should optimiz
On 12/4/2013 6:31 AM, kumar wrote:
> I am having almost 5 to 6 crores of indexed documents in solr. And when i am
> going to change anything in the configuration file solr server is going
> down.
If you mean crore and not core, then you are talking about 50 to 60
million documents. That's a lot.
threw
exception [java.lang.IllegalStateException: Cannot call sendError() after
the response has been committed] with root cause
Can anybody help me solve this problem?
Kumar.
Martin Fowler and Pramod Sadalage have a nice book about this kind of architectural
design: NoSQL Distilled (on emerging polyglot persistence). If you read it you
will see why you would use a NoSQL store, an RDBMS, or both. On the other hand,
I have over 50 million documents on replicated SolrCloud nodes
a
Setting aside the excellent responses that have already been made in this
thread, there are fundamental discrepancies in what you are comparing in
your respective timing tests.
First off: a micro-benchmark like this is virtually useless -- unless you
really plan on only ever executing a singl
On Wed, 2013-09-04 at 14:06 +0200, Sergio Stateri wrote:
> I'm trying to change the data access in the company where I work from
> Oracle to Solr.
They work on different principles and fulfill different needs. Comparing
them with a performance-oriented test is not likely to be a usable basis
for sele
You said nothing about your environments (e.g. operating systems, what
kind of Oracle installation you have, what kind of Solr installation,
how much data in the database, how many documents in the index, RAM for Solr,
for Oracle, for the OS, and the hardware in general... and so on).
Anyway... a migration fro
Hi,
I'm trying to change the data access in the company where I work from
Oracle to Solr. So I ran some tests, like this one:

In Oracle:

private void go() throws Exception {
    Class.forName("oracle.jdbc.driver.OracleDriver");
    Connection conn =
        DriverManager.getConnection("XXX
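For comparison, the Solr side of the same single-record lookup is just an HTTP request
(a sketch; the URL, core name and field are placeholders), which is part of why a
micro-benchmark like this says little about either system:

time curl -s 'http://localhost:8983/solr/collection1/select?q=id:12345&wt=json' > /dev/null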
Hi Roman,
Ok, I will. Thanks!
Cheers,
Dmitry
On Tue, Sep 3, 2013 at 4:46 PM, Roman Chyla wrote:
> Hi Dmitry,
>
> Thanks for the feedback. Yes, it is indeed jmeter issue (or rather, the
> issue of the plugin we use to generate charts). You may want to use the
> github for whatever comes next
>
Hi Dmitry,
Thanks for the feedback. Yes, it is indeed jmeter issue (or rather, the
issue of the plugin we use to generate charts). You may want to use the
github for whatever comes next
https://github.com/romanchyla/solrjmeter/issues
Cheers,
roman
On Tue, Sep 3, 2013 at 7:54 AM, Dmitry Kan
Hi Roman,
Thanks, the --additionalSolrParams was just what I wanted and works fine.
BTW, if you have some special "bug tracking forum" for the tool, I'm happy
to submit questions / bug reports there. Otherwise, this email list is ok
(for me at least).
One other thing I have noticed in the err lo
Hi Dmitry,
If it is something you want to pass with every request (which is my use
case), you can pass it as additional solr params, eg.
python solrjmeter
--additionalSolrParams="fq=other_field:bar+facet=true+facet.field=facet_field_name"
the string should be url encoded.
If it is somethin
Hi Erick,
Agree, this is perfectly fine to mix them in solr. But my question is about
solrjmeter input query format. Just couldn't find a suitable example on the
solrjmeter's github.
Dmitry
On Mon, Sep 2, 2013 at 5:40 PM, Erick Erickson wrote:
> filter and facet queries can be freely intermix
filter and facet queries can be freely intermixed, it's not a problem.
What problem are you seeing when you try this?
Best,
Erick
On Mon, Sep 2, 2013 at 7:46 AM, Dmitry Kan wrote:
> Hi Roman,
>
> What's the format for running the facet+filter queries?
>
> Would something like this work:
>
> fi
Hi Roman,
What's the format for running the facet+filter queries?
Would something like this work:
field:foo >=50 fq=other_field:bar facet=true facet.field=facet_field_name
Thanks,
Dmitry
On Fri, Aug 23, 2013 at 2:34 PM, Dmitry Kan wrote:
> Hi Roman,
>
> With adminPath="/admin" or adminP
Hi Roman,
With adminPath="/admin" or adminPath="/admin/cores", no. Interestingly
enough, though, I can access
http://localhost:8983/solr/statements/admin/system
But I can access http://localhost:8983/solr/admin/cores only with
adminPath="/admin/cores" (which suggests that this is the right
Hi Dmitry,
So it seems solrjmeter should not assume the adminPath - and perhaps needs
to be passed as an argument. When you set the adminPath, are you able to
access localhost:8983/solr/statements/admin/cores ?
roman
On Wed, Aug 21, 2013 at 7:36 AM, Dmitry Kan wrote:
> Hi Roman,
>
> I have not
On Wed, 2013-08-21 at 10:09 +0200, sivaprasad wrote:
> The slave will poll for every 1hr.
And are there normally changes?
> We have configured ~2000 facets and the machine configuration is given
> below.
I assume that you only request a subset of those facets at a time.
How much RAM does your
I'd like to see a screen shot of a search results web page that has 2,000
facets.
-- Jack Krupansky
-Original Message-
From: Erick Erickson
Sent: Wednesday, August 21, 2013 11:24 AM
To: solr-user@lucene.apache.org
Subject: Re: Facing Solr performance during query search
[schema excerpt elided: a list of field definitions, each with stored="true" and
termVectors="true"]
Hi Roman,
I have noticed a difference with different solr.xml config contents. It is
probably legit, but thought to let you know (tests run on fresh checkout as
of today).
As mentioned before, I have two cores configured in solr.xml. If the file
is:
[code]
[/code]
then the
uld be the reason?

And, how to disable optimizing the index, warming the searcher and the cache on
the slave?

Regards,
Siva
Hi Roman,
This looks much better, thanks! The ordinary non-comarison mode works. I'll
post here, if there are other findings.
Thanks for quick turnarounds,
Dmitry
On Wed, Aug 14, 2013 at 1:32 AM, Roman Chyla wrote:
> Hi Dmitry, oh yes, late night fixes... :) The latest commit should make it
Hi Dmitry, oh yes, late night fixes... :) The latest commit should make it
work for you.
Thanks!
roman
On Tue, Aug 13, 2013 at 3:37 AM, Dmitry Kan wrote:
> Hi Roman,
>
> Something bad happened in fresh checkout:
>
> python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
> ./queries/demo/demo.qu
Hi Roman,
Something bad happened in fresh checkout:
python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
./queries/demo/demo.queries -s localhost -p 8983 -a --durationInSecs 60 -R
cms -t /solr/statements -e statements -U 100
Traceback (most recent call last):
File "solrjmeter.py", line 1392,
Hi Dmitry,
On Mon, Aug 12, 2013 at 9:36 AM, Dmitry Kan wrote:
> Hi Roman,
>
> Good point. I managed to run the command with -C and double quotes:
>
> python solrjmeter.py -a -C "g1,cms" -c hour -x ./jmx/SolrQueryTest.jmx
>
> As a result got several files (html, css, js, csv) in the running dir
Hi Roman,
Good point. I managed to run the command with -C and double quotes:
python solrjmeter.py -a -C "g1,cms" -c hour -x ./jmx/SolrQueryTest.jmx
As a result got several files (html, css, js, csv) in the running directory
(any way to specify where the output should be stored in this case?)
W
Hi Dmitry,
The command seems good. Are you sure your shell is not doing something
funny with the params? You could try:
python solrjmeter.py -C "g1,foo" -c hour -x ./jmx/SolrQueryTest.jmx -a
where g1 and foo are results of the individual runs, ie. something that was
started and saved with '-R g1'
Hi Roman,
One more question. I tried to compare different runs (g1 vs cms) using the
command below, but get an error. Should I attach some other param(s)?
python solrjmeter.py -C g1,foo -c hour -x ./jmx/SolrQueryTest.jmx
**ERROR**
File "solrjmeter.py", line 1427, in
main(sys.argv)
File
Hi Roman,
Finally, this has worked! Thanks for quick support.
The graphs look awesome. At least on the index sample :) It is quite easy
to setup and run + possible to run directly on the shard server in
background mode.
my test run was:
python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
./qu
Hi Dmitry,
I've modified the solrjmeter to retrieve data from under the core (the -t
parameter) and the rest from the /solr/admin - I could test it only against
4.0, but it is there the same as 4.3 - it seems...so you can try the fresh
checkout
my test was: python solrjmeter.py -a -x ./jmx/SolrQu
Hi,
Thanks for the clarification, Shawn!
So with this in mind, the following work:
http://localhost:8983/solr/statements/admin/system?wt=json
http://localhost:8983/solr/statements/admin/mbeans?wt=json
not copying their output to save space.
Roman:
is this something that should be set via -t p
On 8/6/2013 6:17 AM, Dmitry Kan wrote:
> Of three URLs you asked for, only the 3rd one gave response:
> The rest report 404.
>
> On Mon, Aug 5, 2013 at 8:38 PM, Roman Chyla wrote:
>
>> Hi Dmitry,
>> So I think the admin pages are different on your version of solr, what do
>> you see when you re
Hi Roman,
With fresh checkout, the reported admin_endpoint is:
http://localhost:8983/solr/admin. This url redirects to
http://localhost:8983/solr/#/ . I'm using solr 4.3.1. Is your tool
supporting this version?
Of three URLs you asked for, only the 3rd one gave response:
{"responseHeader":{"stat
Hi Dmitry,
So I think the admin pages are different on your version of solr, what do
you see when you request... ?
http://localhost:8983/solr/admin/system?wt=json
http://localhost:8983/solr/admin/mbeans?wt=json
http://localhost:8983/solr/admin/cores?wt=json
If your core -t was '/solr/statements',
Hi Roman,
No problem. Still trying to launch the thing..
The query with the added -t parameter generated an error:
1. python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
./queries/demo/demo.queries -s localhost -p 8983 -a --durationInSecs 60 -R
test -t /solr/statements [passed relative path
Hi Dmitry,
Thanks. It was a teething problem, fixed now; please try the fresh checkout
AND add the following to your arguments: -t /solr/core1
That sets the path under which Solr should be contacted; the handler is set
in the jmeter configuration, so if you were using different query handlers
tha
Hi Roman,
Sure:
python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
/home/dmitry/projects/lab/solrjmeter/queries/demo/demo.queries -s localhost
-p 8983 -a --durationInSecs 60 -R test
This is vanilla install (git clone) except for one change that I had to do
related to solr cores:
> git diff
d
8 PM, Roman Chyla wrote:
>>> Hi, here is a short post describing the results of the yesterday run with
>>> added parameters as per Shawn's recommendation, have fun getting confused ;)
>>> http://29min.wordpress.com/2013/08/01/measuring-solr-performance-ii/
>
> I am having a very difficult time with the graphs. I have no idea what
> I'm looking at. The graphs are probably
On 8/1/2013 2:08 PM, Roman Chyla wrote:
Hi, here is a short post describing the results of the yesterday run with
added parameters as per Shawn's recommendation, have fun getting confused ;)
http://29min.wordpress.com/2013/08/01/measuring-solr-performance-ii/
I am having a very difficult
Hi, here is a short post describing the results of the yesterday run with
added parameters as per Shawn's recommendation, have fun getting confused ;)
http://29min.wordpress.com/2013/08/01/measuring-solr-performance-ii/
roman
On Wed, Jul 31, 2013 at 12:32 PM, Roman Chyla wrote:
> I
Hi Bernd,
On Thu, Aug 1, 2013 at 4:07 AM, Bernd Fehling <
bernd.fehl...@uni-bielefeld.de> wrote:
> Yes, UseNuma is only for Parallel Scavenger garbage collector and only
> for Solaris 9 and higher and Linux kernel 2.6.19 and glibc 2.6.1.
> And it performs with 64-bit better than 32-bit.
> So no
Dmitry,
Can you post the entire invocation line?
roman
On Thu, Aug 1, 2013 at 7:46 AM, Dmitry Kan wrote:
> Hi Roman,
>
> When I try to run with -q
> /home/dmitry/projects/lab/solrjmeter/queries/demo/demo.queries
>
> here what is reported:
> Traceback (most recent call last):
> File "solrjmete
Hi Roman,
When I try to run with -q
/home/dmitry/projects/lab/solrjmeter/queries/demo/demo.queries
here what is reported:
Traceback (most recent call last):
File "solrjmeter.py", line 1390, in
main(sys.argv)
File "solrjmeter.py", line 1309, in main
tests = find_tests(options)
File
Yes, UseNuma is only for Parallel Scavenger garbage collector and only
for Solaris 9 and higher and Linux kernel 2.6.19 and glibc 2.6.1.
And it performs with 64-bit better than 32-bit.
So no effects for G1.
With standard applications CMS is very slightly better than G1 but
when it comes to huge he
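For reference, these are the kinds of flags being compared, however you pass JVM
options to your container or startup script (a sketch; the pause target is a
placeholder, and as noted above -XX:+UseNUMA only applies to the Parallel Scavenger
collector):

# G1
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
# CMS
#JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC"
# Parallel Scavenger with NUMA awareness
#JAVA_OPTS="$JAVA_OPTS -XX:+UseParallelGC -XX:+UseNUMA"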