Comments inline:

On Sun, Feb 15, 2015 at 2:09 AM, jaime spicciati <jaime.spicci...@gmail.com>
wrote:

> All,
> This is my current understanding of how SolrCloud load balancing works...
>
> Within SolrCloud, for a cluster with more than 1 shard and at least 1
> replica, the ZooKeeper-aware SolrJ client uses LBHttpSolrServer, which
> round-robins across the replicas and leaders in the cluster. In turn, the
> replica (leader or not) that performs the distributed query may then go to
> the leader or a replica for each shard, again chosen round robin via
> LBHttpSolrServer.
>

Yes, you're right that CloudSolrServer (renamed to CloudSolrClient in the
upcoming 5.0 release) uses LBHttpSolrServer (likewise renamed to
LBHttpSolrClient in 5.0). However, due to the way it is used, the current
load balancing is closer to random selection than round robin.
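
To illustrate the effect, here is a simplified sketch of the idea (not the
actual CloudSolrServer code, and the replica URLs are made up): the list of
live replica URLs is re-shuffled for every request before being handed to
the load balancer, so there is no stable round-robin order across requests.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class ReplicaSelectionSketch {
      public static void main(String[] args) {
        // Hypothetical replica URLs for a single shard.
        List<String> replicaUrls = new ArrayList<>(Arrays.asList(
            "http://host1:8983/solr/collection1",   // leader
            "http://host2:8983/solr/collection1",   // replica
            "http://host3:8983/solr/collection1")); // replica

        // The URL list is re-shuffled for every request, so the server that
        // handles request N has no relation to the one that handled N-1.
        for (int request = 1; request <= 3; request++) {
          Collections.shuffle(replicaUrls);
          System.out.println("request " + request + " -> " + replicaUrls.get(0));
        }
      }
    }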


>
> If this is correct, then in a SolrCloud instance that has, let's say, 1
> replica, the initial query from the user may go to the leader for shard 1,
> and when the user paginates to the second page the subsequent query may go
> to the replica of shard 1. This seems inefficient from a caching
> perspective, since the queryResultCache and possibly the filterCache would
> need to be reloaded.
>

If you have more than 1 replica for shard1 then yes, the first page may be
served from replica1 and the second may be served from replica2. And yes,
that is less efficient from a caching perspective for the queryResultCache,
and the first few requests pay the price. The filterCache holds top-level
filters which are re-used across queries, so it is probably not affected as
much. It becomes even more complicated when you have more than 1 shard,
because then each search is itself composed of multiple requests (for
scatter/gather) and each of those requests may again select a random
replica. There are definitely some optimizations that can be explored here.
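
As a rough illustration of the scatter/gather point (a pure simulation, not
SolrJ code, with made-up host and core names): with 2 shards and 2 replicas
per shard, each sub-request independently picks a replica, so two identical
queries can end up warming caches on completely different cores.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Random;

    public class ScatterGatherSketch {
      private static final Random RANDOM = new Random();

      // Pick one replica of a shard at random, as the coordinating node
      // does for each sub-request of a distributed query.
      private static String pick(List<String> replicas) {
        return replicas.get(RANDOM.nextInt(replicas.size()));
      }

      public static void main(String[] args) {
        // Hypothetical replica addresses for a 2-shard collection.
        List<String> shard1 = Arrays.asList("host1:8983/shard1_replica1",
                                            "host2:8983/shard1_replica2");
        List<String> shard2 = Arrays.asList("host3:8983/shard2_replica1",
                                            "host4:8983/shard2_replica2");

        // Page 1 and page 2 of the same query may fan out to different
        // replicas, so caches warmed by the first request may not help
        // the second.
        for (int page = 1; page <= 2; page++) {
          System.out.println("page " + page + " -> "
              + pick(shard1) + " + " + pick(shard2));
        }
      }
    }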


>
> From what I can find, there does not appear to be any option for session
> affinity within SolrCloud query execution?
>
>
> Thanks!
>



-- 
Regards,
Shalin Shekhar Mangar.
