Ended up working well with createNodeSet=EMPTY and placing all replicas manually.
Thank you all for the assistance!
On Thu, Jun 14, 2018 at 9:28 AM, Jan Høydahl wrote:
You could also look into the Autoscaling framework in 7.x, which can be programmed
to move shards around based on system load and hardware specs on the various nodes,
so in theory that framework (although still a bit unstable) will suggest moving
some replicas from weak nodes over to more powerful ones.
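To give a rough idea (an untested sketch, not lifted from the docs - it assumes a 7.x node reachable at localhost:8983 and posts to the /admin/autoscaling endpoint), setting cluster preferences so the framework favors nodes with more free disk and fewer cores would look roughly like this:

    import json
    import requests  # assumes the requests library is available

    # Hypothetical node address - point this at any node in the cluster.
    AUTOSCALING = "http://localhost:8983/solr/admin/autoscaling"

    # Ask the autoscaling framework to prefer nodes with fewer cores and
    # more free disk when deciding where replicas should go.
    body = {
        "set-cluster-preferences": [
            {"minimize": "cores", "precision": 1},
            {"maximize": "freedisk"}
        ]
    }
    resp = requests.post(AUTOSCALING, data=json.dumps(body),
                         headers={"Content-Type": "application/json"})
    print(resp.status_code, resp.json())

I believe you can also add a cluster policy (hard constraints like {"replica": "<2", "node": "#ANY"}) on top of the preferences if suggestions alone are not enough.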
In a mixed-hardware situation you can certainly place replicas as you
choose: create a minimal collection, or create it with the special
createNodeSet=EMPTY value, and then place your replicas one-by-one with ADDREPLICA.
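Roughly like this, for example (just a sketch, untested; the collection, configset, and node names are made up, and it assumes a node at localhost:8983):

    import requests  # assumes the requests library is available

    COLLECTIONS = "http://localhost:8983/solr/admin/collections"

    # Create the collection with no replicas at all (createNodeSet=EMPTY),
    # then add each replica explicitly on the node you want it on.
    requests.get(COLLECTIONS, params={
        "action": "CREATE",
        "name": "mycollection",            # hypothetical collection name
        "numShards": 2,
        "replicationFactor": 1,
        "createNodeSet": "EMPTY",
        "collection.configName": "myconf"  # hypothetical configset
    })

    # Put shard1 on the big box, shard2 on a smaller one, and so on.
    placements = {
        "shard1": "bignode1:8983_solr",    # hypothetical node names
        "shard2": "smallnode1:8983_solr",
    }
    for shard, node in placements.items():
        requests.get(COLLECTIONS, params={
            "action": "ADDREPLICA",
            "collection": "mycollection",
            "shard": shard,
            "node": node,
        })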
You can also consider "replica placement rules", see:
https://lucene.apache.org/solr/guide/6_6/rule-based-replica-plac
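As a rough example (again untested, and the exact rule values here - at most one replica per node, more than 100 GB free disk - are made up for illustration), rules are passed as repeated rule parameters at CREATE time:

    import requests  # assumes the requests library is available

    COLLECTIONS = "http://localhost:8983/solr/admin/collections"

    # Create a collection with placement rules: at most one replica of any
    # shard per node, and only nodes with more than 100 GB free disk.
    requests.get(COLLECTIONS, params=[
        ("action", "CREATE"),
        ("name", "mycollection"),            # hypothetical collection name
        ("numShards", 2),
        ("replicationFactor", 2),
        ("collection.configName", "myconf"), # hypothetical configset
        ("rule", "replica:<2,node:*"),
        ("rule", "freedisk:>100"),
    ])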
On 6/12/2018 9:12 AM, Michael Braun wrote:
> The way to handle this right now looks to be running additional Solr
> instances on the nodes with increased resources to balance the load (so if
> the machines are 1x, 1.5x, and 2x, run 2 instances, 3 instances, and 4
> instances, respectively).
What does your base hardware configuration look like?
You could run several VMs on the machines with higher-spec hardware.
Deepak
"The greatness of a nation can be judged by the way its animals are
treated. Please consider stopping the cruelty by becoming a Vegan"
+91 73500 12833
deic...@gmail.c
We have a Solr Cloud cluster with different kinds of nodes - some may differ
significantly in hardware specs (50-100% more disk/RAM/CPU, etc.). Ideally, the
nodes with more resources could take on more shard replicas.
It looks like the Collections API (
https://lucene.apache.org/sol