On 4/28/2015 4:40 AM, shacky wrote:
> I've been using Solr for 3 years, and now I want to move to a SolrCloud
> configuration on 3 nodes which would make my infrastructure highly
> available.
> But I am very confused about it.
> 
> I read that ZooKeeper should not be installed on the same Solr nodes,
> but I also read another guide that installs one ZooKeeper instance and
> 2 Solr instances, so I cannot understand how it can be completely
> redundant.
> I also read the SolrCloud quick start guide (which installs N nodes on
> the same server), but I am still confused about what I need to do to
> configure the production nodes.


Erick's reply is spot on.

My two cents:

Solr can work perfectly well when ZooKeeper is running on the same
nodes.  It can even work if you choose to run the embedded ZooKeeper on
all three nodes and configure them into an ensemble, but the embedded
ZooKeeper is not recommended at all for a production SolrCloud.  I
personally think we should never have created the embedded ZooKeeper,
but it is very effective for quickly getting a test installation running.

One of the indexes that I maintain runs on the smallest possible
redundant SolrCloud install: three servers in total.  Two of them run
both Solr and a separate ZooKeeper process; the third runs only
ZooKeeper and is a much lower-spec server than the other two.
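If it helps, the ensemble config for a setup like that is tiny.  A
sketch of zoo.cfg, identical on all three ZooKeeper hosts (hostnames and
paths are hypothetical):

  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/var/lib/zookeeper
  clientPort=2181
  server.1=solr1.example.com:2888:3888
  server.2=solr2.example.com:2888:3888
  server.3=zk3.example.com:2888:3888

Each host also needs a "myid" file in dataDir containing just its own
server number (1, 2, or 3).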

The only real concern with running ZooKeeper on the same machine as Solr
is I/O bandwidth.  If you can put the ZooKeeper database on a completely
separate disk (or set of disks) from your Solr indexes, then that is not
a worry.  If the disk I/O on the server never gets high enough that it
would delay reads and writes on the ZooKeeper database, then you could
even have the ZooKeeper database on the same disk volume as Solr data.
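ZooKeeper lets you split the write-heavy part onto its own device with
dataLogDir.  Something like this in zoo.cfg (paths are just examples):

  dataDir=/var/lib/zookeeper        # snapshots and the myid file
  dataLogDir=/zk-txlog/zookeeper    # transaction log on a dedicated disk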

My own SolrCloud install does not use separate disks.  It works because
there is plenty of RAM to cache my entire index and the ZooKeeper
database, so disk I/O isn't an issue.

http://wiki.apache.org/solr/SolrPerformanceProblems#SolrCloud

Thanks,
Shawn
