Re: Errors During Load Test

2016-02-04 Thread Toke Eskildsen
Tiwari, Shailendra  wrote:
> We are on Solr 4.10.3. Got 2 load balanced RedHat with 16 GB
> memory on each. Memory assigned to JVM 4 GB, 2 Shards, 
> total docs 60 K, and 4 replicas.

As you are chasing throughput, you should aim to lower the overall resources 
needed for a single request, potentially at the cost of latency. Unless you 
have really large documents, very special queries or something else making this 
an outlier, you will be much better off with just 1 shard and 2 replicas. 
Having more than 1 shard introduces an overhead for each request, and for such a 
small setup that overhead is relatively large. 

- Toke Eskildsen
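If this is SolrCloud, Toke's 1-shard/2-replica layout corresponds to a Collections API CREATE call. A sketch only: the host, collection name, and configName below are hypothetical placeholders, not values from the thread.

```shell
# Create a 1-shard, 2-replica collection via the Collections API.
# Hypothetical host, collection name, and configName; adjust to your cluster.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=search&numShards=1&replicationFactor=2&maxShardsPerNode=1&collection.configName=myconf"
```

The response is XML (or JSON with `wt=json`) reporting success or failure per node.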


Re: Errors During Load Test

2016-02-04 Thread Binoy Dalal
What is your solr setup -- nodes/shards/specs?
7221 requests/min is a lot, so it's likely that your Solr setup simply isn't
able to support this kind of load. Requests time out under the pressure, which
is why you keep seeing the timeout and connect exceptions.

On Thu, 4 Feb 2016, 20:30 Tiwari, Shailendra <
shailendra.tiw...@macmillan.com> wrote:

> Hi All,
>
> We did our first load test on the Search (Solr) API and started to see some
> errors after 2,000 users. The errors would go away after 30 seconds but kept
> recurring frequently. The errors were "java.net.SocketTimeoutException" and
> "org.apache.http.conn.HttpHostConnectException". We used JMeter to run the
> load test, with a total of 15 different search terms used to exercise the
> API. The total request rate was 7,221 requests/min.
> We are running Apache/RedHat.
> We want to scale up to 4,000 users. What's the recommendation to get there?
>
> Thanks
>
> Shail
>
-- 
Regards,
Binoy Dalal


Re: Errors During Load Test

2016-02-04 Thread Erick Erickson
The short form is "add more replicas", assuming you're using SolrCloud.

If older-style master/slave, then "add more slaves". Solr request processing
scales pretty linearly with the number of replicas (or slaves).
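As a back-of-the-envelope illustration of that linear scaling (the numbers below are hypothetical, not measured from this setup): if 2 replicas start timing out around 2,000 users, that is roughly 1,000 users per replica, so 4,000 users would call for about 4 replicas.

```shell
# Capacity sketch under a linear-scaling assumption (hypothetical numbers).
target_users=4000
users_per_replica=1000   # assumed: ~2000 users handled by 2 replicas
# Ceiling division: replicas needed to cover the target load.
replicas=$(( (target_users + users_per_replica - 1) / users_per_replica ))
echo "replicas needed: $replicas"   # -> replicas needed: 4
```

In practice you would measure the per-replica breaking point with a load test rather than assume perfect linearity.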

Note that this is _not_ adding shards (assuming SolrCloud). You usually add
shards when your response time under light load is unacceptable, indicating
that you need fewer documents in each shard.

Binoy's question needs to be answered before anything but the most general
advice is possible: what is your setup? What
version of Solr? How many docs? How many shards? Etc.

Best,
Erick
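Concretely, "add more replicas" in SolrCloud maps to the Collections API ADDREPLICA command (available since Solr 4.8, so it applies to a 4.10.x cluster). The host and collection name below are hypothetical placeholders.

```shell
# Add one more replica of shard1 to the collection (hypothetical names).
# Solr picks a node automatically unless you pin one with the "node" parameter.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=search&shard=shard1"
```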



RE: Errors During Load Test

2016-02-04 Thread Tiwari, Shailendra
We are on Solr 4.10.3. We have 2 load-balanced RedHat servers with 16 GB of 
memory each. The JVM is assigned 4 GB; 2 shards, 60 K total docs, and 4 replicas.

Thanks 

Shail
