Yes, it uses the autoscaling policies to achieve the same. Please refer to the documentation here: https://lucene.apache.org/solr/guide/7_5/solrcloud-autoscaling-policy-preferences.html
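For reference, a cluster policy along these lines can keep replicas of a shard on distinct physical hosts (even when several Solr instances share a server) and spread them across availability zones. This is only a sketch against the 7.x autoscaling API; the `availability_zone` system property name is an assumption and must match whatever `-Dproperty=value` each node was started with:

```json
{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "host": "#ANY"},
    {"replica": "#EQUAL", "shard": "#EACH", "sysprop.availability_zone": "#EACH"}
  ]
}
```

The payload would be POSTed to the autoscaling endpoint (`/api/cluster/autoscaling`, or `/solr/admin/autoscaling` on the v1 API). The key point for the multi-instance case is that the policy can match on `host` rather than `node`, so co-located instances count against the same server.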
On Thu, Sep 27, 2018, 02:11 Chuck Reynolds <creyno...@ancestry.com> wrote:

> Noble,
>
> Are you saying in the latest version of Solr that this would work with
> three instances of Solr running on each server?
>
> If so, how?
>
> Thanks again for your help.
>
> On 9/26/18, 9:11 AM, "Noble Paul" <noble.p...@gmail.com> wrote:
>
>     I'm not sure if it is pertinent to ask you to move to the latest Solr,
>     which has the policy-based replica placement. Unfortunately, I don't
>     have any other solution I can think of.
>
>     On Wed, Sep 26, 2018 at 11:46 PM Chuck Reynolds <creyno...@ancestry.com> wrote:
> >
> > Noble,
> >
> > So, other than manually moving replicas of a shard, do you have a
> > suggestion for how one might accomplish multiple availability zones with
> > multiple instances of Solr running on each server?
> >
> > Thanks
> >
> > On 9/26/18, 12:56 AM, "Noble Paul" <noble.p...@gmail.com> wrote:
> >
> >     The rules suggested by Steve are correct. I tested it locally and I got
> >     the same errors. That probably means a bug exists.
> >     All the new development effort is invested in the new policy feature:
> >     https://lucene.apache.org/solr/guide/7_4/solrcloud-autoscaling-policy-preferences.html
> >
> >     The old one is going to be deprecated pretty soon, so I'm not sure if
> >     we should be investing our resources here.
> >
> >     On Wed, Sep 26, 2018 at 1:23 PM Chuck Reynolds <creyno...@ancestry.com> wrote:
> > >
> > > Shawn,
> > >
> > > Thanks for the info. We’ve been running this way for the past 4 years.
> > >
> > > We were running on very large hardware, 20 physical cores with 256 gigs
> > > of RAM and 3 billion documents, and it was the only way we could take
> > > advantage of the hardware.
> > >
> > > Running 1 Solr instance per server never gave us the throughput we needed.
> > >
> > > So I somewhat disagree with your statement, because our tests proved otherwise.
> > >
> > > Thanks for the info.
> > >
> > > Sent from my iPhone
> > >
> > > > On Sep 25, 2018, at 4:19 PM, Shawn Heisey <apa...@elyograg.org> wrote:
> > > >
> > > >> On 9/25/2018 9:21 AM, Chuck Reynolds wrote:
> > > >> Each server has three instances of Solr running on it, so every
> > > >> instance on the server has to be in the same replica set.
> > > >
> > > > You should be running exactly one Solr instance per server. When
> > > > evaluating rules for replica placement, SolrCloud will treat each
> > > > instance as completely separate from all others, including others on
> > > > the same machine. It will not know that those three instances are on
> > > > the same machine. One Solr instance can handle MANY indexes.
> > > >
> > > > There is only ONE situation where it makes sense to run multiple
> > > > instances per machine, and in my strong opinion, even that situation
> > > > should not be handled with multiple instances. That situation is this:
> > > > when running one instance would require a REALLY large heap. Garbage
> > > > collection pauses can become extreme in that situation, so some people
> > > > will run multiple instances that each have a smaller heap, and divide
> > > > their indexes between them. In my opinion, when you have enough index
> > > > data on an instance that it requires a huge heap, instead of running
> > > > two or more instances on one server, it's time to add more servers.
> > > >
> > > > Thanks,
> > > > Shawn
> >
> > --
> > -----------------------------------------------------
> > Noble Paul
>
> --
> -----------------------------------------------------
> Noble Paul