But here is my concern when we have just 2 servers:
 - I want one to be able to take over if the other fails, as you point
out.
 - But when *both* servers are up, I don't want the SolrCloud load balancer
to send both parts of a request to Shard 1 and Replica 2, since they would
both reside on the same physical server (see the sketch after this list).
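
For what it's worth, a minimal sketch of what I mean, using Solr's
standard "shards" request parameter to name which core should answer for
each shard (the host names, port, collection name, and core names here
are just assumptions for illustration):

import urllib.parse
import urllib.request

# Query one node, but spell out which core serves each shard. Here shard 1
# is answered from server1 and shard 2 from server2, so both machines share
# the work of this single request (hostnames/core names are made up).
SOLR_URL = "http://server1:8983/solr/collection1/select"
params = {
    "q": "*:*",
    "shards": ",".join([
        "server1:8983/solr/collection1_shard1_replica1",
        "server2:8983/solr/collection1_shard2_replica1",
    ]),
    "wt": "json",
}

with urllib.request.urlopen(SOLR_URL + "?" + urllib.parse.urlencode(params)) as resp:
    print(resp.read().decode("utf-8"))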

Does that make sense? I want *both* server 1 and server 2 to share the
processing of every request, *and* I want the failover capability.

I'm probably missing some bit of logic here, but I want to be sure I
understand the architecture.
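
If it helps to make the setup concrete, here is a sketch of how I
understand such a collection would be created through the Collections API
(the node address and collection name are placeholders, not our real
ones):

import urllib.parse
import urllib.request

ADMIN_URL = "http://server1:8983/solr/admin/collections"
params = {
    "action": "CREATE",
    "name": "collection1",
    "numShards": 2,          # split the index so each request can use both servers
    "replicationFactor": 2,  # every shard also gets a copy on the other server
    "maxShardsPerNode": 2,   # let each of the 2 nodes host two cores (leader + replica)
    "wt": "json",
}

with urllib.request.urlopen(ADMIN_URL + "?" + urllib.parse.urlencode(params)) as resp:
    print(resp.read().decode("utf-8"))

With 2 nodes, that should come out as Shard 1 plus a replica of Shard 2 on
one server, and Shard 2 plus a replica of Shard 1 on the other, which is
the layout described in my earlier mail quoted below.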

Dave



-----Original Message-----
From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com] 
Sent: Thursday, April 18, 2013 8:13 PM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud loadbalancing, replication, and failover

Correct. This is what you want if server 2 goes down.

Otis
Solr & ElasticSearch Support
http://sematext.com/
On Apr 18, 2013 3:11 AM, "David Parks" <davidpark...@yahoo.com> wrote:

> Step 1: distribute processing
>
> We have 2 servers on which we'll run 2 SolrCloud instances.
>
> We'll define 2 shards so that both servers are busy for each request 
> (improving response time of the request).
>
>
>
> Step 2: Failover
>
> We would now like to ensure that if either of the servers goes down
> (we're very unlucky with disks), the other will be able to take over
> automatically.
>
> So we define 2 shards with a replication factor of 2.
>
>
>
> So we have:
>
> - Server 1: Shard 1, Replica 2
>
> - Server 2: Shard 2, Replica 1
>
>
>
> Question:
>
> But in SolrCloud, replicas are active, right? So isn't it now possible
> that the load balancer will have Server 1 process *both* parts of a
> request? After all, it has both shards due to the replication, right?
>
>
